Test Report: Docker_Linux 21655

f8e963384863fe0b9099940b8c321271fa941d51:2025-09-29:41681

Failed tests (12/341)

TestFunctional/parallel/DashboardCmd (301.89s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113333 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113333 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113333 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-113333 --alsologtostderr -v=1] stderr:
I0929 11:20:10.865322  411898 out.go:360] Setting OutFile to fd 1 ...
I0929 11:20:10.865597  411898 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:10.865607  411898 out.go:374] Setting ErrFile to fd 2...
I0929 11:20:10.865612  411898 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:10.865811  411898 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
I0929 11:20:10.866138  411898 mustload.go:65] Loading cluster: functional-113333
I0929 11:20:10.866538  411898 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:10.866997  411898 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:10.886995  411898 host.go:66] Checking if "functional-113333" exists ...
I0929 11:20:10.887318  411898 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 11:20:10.948699  411898 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 11:20:10.936262235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
I0929 11:20:10.948816  411898 api_server.go:166] Checking apiserver status ...
I0929 11:20:10.948856  411898 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 11:20:10.948923  411898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:10.971777  411898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:11.079131  411898 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9442/cgroup
W0929 11:20:11.091378  411898 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/9442/cgroup: Process exited with status 1
stdout:

stderr:
I0929 11:20:11.091464  411898 ssh_runner.go:195] Run: ls
I0929 11:20:11.095491  411898 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0929 11:20:11.099856  411898 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0929 11:20:11.099911  411898 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 11:20:11.100058  411898 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:11.100073  411898 addons.go:69] Setting dashboard=true in profile "functional-113333"
I0929 11:20:11.100079  411898 addons.go:238] Setting addon dashboard=true in "functional-113333"
I0929 11:20:11.100107  411898 host.go:66] Checking if "functional-113333" exists ...
I0929 11:20:11.100403  411898 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:11.121079  411898 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 11:20:11.122453  411898 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 11:20:11.124376  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 11:20:11.124399  411898 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 11:20:11.124469  411898 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:11.141727  411898 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:11.251510  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 11:20:11.251538  411898 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 11:20:11.272714  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 11:20:11.272736  411898 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 11:20:11.291548  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 11:20:11.291572  411898 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 11:20:11.312515  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 11:20:11.312540  411898 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 11:20:11.335893  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 11:20:11.335924  411898 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 11:20:11.355911  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 11:20:11.355938  411898 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 11:20:11.375628  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 11:20:11.375659  411898 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 11:20:11.395416  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 11:20:11.395439  411898 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 11:20:11.414477  411898 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 11:20:11.414502  411898 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 11:20:11.432605  411898 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 11:20:11.883051  411898 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-113333 addons enable metrics-server

I0929 11:20:11.884043  411898 addons.go:201] Writing out "functional-113333" config to set dashboard=true...
W0929 11:20:11.884315  411898 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 11:20:11.885218  411898 kapi.go:59] client config for functional-113333: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt", KeyFile:"/home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.key", CAFile:"/home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil),
NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f41c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 11:20:11.885821  411898 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 11:20:11.885843  411898 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 11:20:11.885851  411898 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 11:20:11.885861  411898 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 11:20:11.885867  411898 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 11:20:11.894655  411898 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  63763807-fce1-4133-8023-0bc523388a1a 877 0 2025-09-29 11:20:11 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 11:20:11 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.98.241.54,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.98.241.54],IPFamilies:[IPv4],AllocateLoadBalancerN
odePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 11:20:11.894835  411898 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 11:20:11.894930  411898 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-113333 proxy --port 36195]
I0929 11:20:11.895221  411898 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 11:20:11.949189  411898 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0929 11:20:11.949271  411898 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 11:20:11.959677  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d5a9c3b-02b2-46c3-a734-ef495c3a21ce] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0007d4780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b6000 TLS:<nil>}
I0929 11:20:11.959793  411898 retry.go:31] will retry after 105.29µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.964293  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1c81b15c-3cb2-4a27-9fa4-3f6d8a854643] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0014bc840 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfb80 TLS:<nil>}
I0929 11:20:11.964382  411898 retry.go:31] will retry after 80.446µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.970525  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a70ea1cd-7fea-43f4-b7a4-d5682b48ede6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0007d4980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfcc0 TLS:<nil>}
I0929 11:20:11.970618  411898 retry.go:31] will retry after 163.915µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.975065  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[979c94dd-cf15-40b7-a0cb-afc1176ef9fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0007d4a00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002b7e00 TLS:<nil>}
I0929 11:20:11.975133  411898 retry.go:31] will retry after 262.248µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.979398  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b6463c17-316c-471e-9587-65f250b5a5fe] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0014bc980 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0003bfe00 TLS:<nil>}
I0929 11:20:11.979535  411898 retry.go:31] will retry after 673.432µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.986589  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[417da0f1-1431-48e9-afd8-443566220a76] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0007d4b40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a000 TLS:<nil>}
I0929 11:20:11.986718  411898 retry.go:31] will retry after 518.032µs: Temporary Error: unexpected response code: 503
I0929 11:20:11.990827  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d030e557-6a5d-4701-8829-6e11a782084b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0014bca80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698000 TLS:<nil>}
I0929 11:20:11.990914  411898 retry.go:31] will retry after 1.115122ms: Temporary Error: unexpected response code: 503
I0929 11:20:11.994581  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8913a13a-080e-46ff-8452-ff23bc48df09] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:11 GMT]] Body:0xc0014bcb00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698140 TLS:<nil>}
I0929 11:20:11.994686  411898 retry.go:31] will retry after 2.500646ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.000576  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b99ae79a-1e57-4f2b-b759-102ef2f22733] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d4d00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a140 TLS:<nil>}
I0929 11:20:12.000638  411898 retry.go:31] will retry after 2.355685ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.006280  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a926b466-a496-483d-884b-f5cc49e4f184] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bcc00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698280 TLS:<nil>}
I0929 11:20:12.006340  411898 retry.go:31] will retry after 3.030698ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.011899  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[937e0bb0-387a-469b-a8e9-53e1f8e4c03b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d4fc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a280 TLS:<nil>}
I0929 11:20:12.011959  411898 retry.go:31] will retry after 5.967932ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.021145  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cca3875d-5067-4a77-b787-92756a163bee] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d57c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016983c0 TLS:<nil>}
I0929 11:20:12.021210  411898 retry.go:31] will retry after 4.693221ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.028937  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[222e8599-df90-4d7c-899c-bb617379c70b] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d58c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698500 TLS:<nil>}
I0929 11:20:12.028996  411898 retry.go:31] will retry after 7.945867ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.040000  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[03b7a232-c811-48f9-8814-538f251cddd7] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bcd00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698640 TLS:<nil>}
I0929 11:20:12.040068  411898 retry.go:31] will retry after 23.567073ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.067330  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[0ac47959-9bb3-4b96-b701-bb61c0ff7a6f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0015fd480 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a3c0 TLS:<nil>}
I0929 11:20:12.067401  411898 retry.go:31] will retry after 16.545954ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.087373  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d5b555d6-d9e9-4aad-b5c6-38ca1c256940] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bce00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000707540 TLS:<nil>}
I0929 11:20:12.087439  411898 retry.go:31] will retry after 61.273935ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.152673  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b569584d-2ee9-48fb-8b69-eafb009198dc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0015fd5c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a500 TLS:<nil>}
I0929 11:20:12.152749  411898 retry.go:31] will retry after 68.202803ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.225222  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[d74e93b4-9a99-4493-bdb2-9b7b0a0d9dfc] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0015fd680 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000707680 TLS:<nil>}
I0929 11:20:12.225296  411898 retry.go:31] will retry after 82.210728ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.311414  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[e2b93faf-11bd-495d-9529-7cb40479f34f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bcec0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0007077c0 TLS:<nil>}
I0929 11:20:12.311494  411898 retry.go:31] will retry after 147.243651ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.462763  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ffc7f095-3b73-4f8f-b3ec-5e44a3e4434c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0015fd780 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a640 TLS:<nil>}
I0929 11:20:12.462861  411898 retry.go:31] will retry after 162.588755ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.628641  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3b921a98-7496-4b23-873a-aae50a5e9353] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0007d5c80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000707900 TLS:<nil>}
I0929 11:20:12.628702  411898 retry.go:31] will retry after 242.946834ms: Temporary Error: unexpected response code: 503
I0929 11:20:12.874969  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43de276e-0b91-4020-8ae6-69032db4bf7d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:12 GMT]] Body:0xc0014bcfc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698780 TLS:<nil>}
I0929 11:20:12.875027  411898 retry.go:31] will retry after 495.346739ms: Temporary Error: unexpected response code: 503
I0929 11:20:13.373551  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ad4c6322-c804-486d-8b5a-474eb19398bc] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:13 GMT]] Body:0xc0015fd8c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a780 TLS:<nil>}
I0929 11:20:13.373619  411898 retry.go:31] will retry after 1.047679097s: Temporary Error: unexpected response code: 503
I0929 11:20:14.425150  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[56e8593a-ffee-411f-9e83-9c1ca2b68d2e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:14 GMT]] Body:0xc0014bd0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000707a40 TLS:<nil>}
I0929 11:20:14.425225  411898 retry.go:31] will retry after 1.275988625s: Temporary Error: unexpected response code: 503
I0929 11:20:15.704702  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[f86bbcc1-034c-4e80-97d1-b8398af855db] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:15 GMT]] Body:0xc0008685c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170a8c0 TLS:<nil>}
I0929 11:20:15.704770  411898 retry.go:31] will retry after 1.44204104s: Temporary Error: unexpected response code: 503
I0929 11:20:17.149899  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c5f4c7ad-dff2-4d2b-b641-33c0b39e1324] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:17 GMT]] Body:0xc000868740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aa3c0 TLS:<nil>}
I0929 11:20:17.149965  411898 retry.go:31] will retry after 3.389070842s: Temporary Error: unexpected response code: 503
I0929 11:20:20.545016  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[bf21411c-2a0b-41b1-9d1d-de79c63ae8f1] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:20 GMT]] Body:0xc0007d5dc0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aa500 TLS:<nil>}
I0929 11:20:20.545112  411898 retry.go:31] will retry after 4.613906702s: Temporary Error: unexpected response code: 503
I0929 11:20:25.164847  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c34844d-f01b-4c66-a591-13286faeba86] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:25 GMT]] Body:0xc000868940 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aa640 TLS:<nil>}
I0929 11:20:25.164941  411898 retry.go:31] will retry after 7.574140968s: Temporary Error: unexpected response code: 503
I0929 11:20:32.742654  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c98433e6-656c-4a1d-8cdb-e20ecae0b812] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:32 GMT]] Body:0xc0014bd1c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0016988c0 TLS:<nil>}
I0929 11:20:32.742718  411898 retry.go:31] will retry after 5.540934918s: Temporary Error: unexpected response code: 503
I0929 11:20:38.287802  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[05992f34-d3dd-47b3-a301-a176a8bfc90c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:38 GMT]] Body:0xc000868a40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc001698a00 TLS:<nil>}
I0929 11:20:38.287917  411898 retry.go:31] will retry after 16.896410782s: Temporary Error: unexpected response code: 503
I0929 11:20:55.188361  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2d6d620a-3b40-4f5d-881c-8ed715c20187] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:20:55 GMT]] Body:0xc000868c40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc00170aa00 TLS:<nil>}
I0929 11:20:55.188434  411898 retry.go:31] will retry after 10.347207584s: Temporary Error: unexpected response code: 503
I0929 11:21:05.542204  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b595154a-318d-4c9a-a281-5a750c413e34] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:21:05 GMT]] Body:0xc0014bd2c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aa780 TLS:<nil>}
I0929 11:21:05.542276  411898 retry.go:31] will retry after 38.613353795s: Temporary Error: unexpected response code: 503
I0929 11:21:44.160767  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[c776ccdd-527a-4fb3-8f78-25cc1b4ea042] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:21:44 GMT]] Body:0xc000868e40 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0005aac80 TLS:<nil>}
I0929 11:21:44.160855  411898 retry.go:31] will retry after 1m1.828281956s: Temporary Error: unexpected response code: 503
I0929 11:22:45.992898  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[619da1b5-1db4-43f5-8932-26e8e15ce582] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:22:45 GMT]] Body:0xc0015fc0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206280 TLS:<nil>}
I0929 11:22:45.992974  411898 retry.go:31] will retry after 50.195696598s: Temporary Error: unexpected response code: 503
I0929 11:23:36.192435  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5c76df89-7ed2-4b83-88f5-651db017321c] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:23:36 GMT]] Body:0xc0014bc040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc0002063c0 TLS:<nil>}
I0929 11:23:36.192533  411898 retry.go:31] will retry after 55.964495296s: Temporary Error: unexpected response code: 503
I0929 11:24:32.160540  411898 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[3706c8b2-ed48-42a7-8bb2-ff7c07b61c65] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 11:24:32 GMT]] Body:0xc0015fc0c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0xc000206640 TLS:<nil>}
I0929 11:24:32.160659  411898 retry.go:31] will retry after 45.381762389s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-113333
helpers_test.go:243: (dbg) docker inspect functional-113333:

-- stdout --
	[
	    {
	        "Id": "0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8",
	        "Created": "2025-09-29T11:17:04.817558805Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 391650,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:17:04.849941498Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/hosts",
	        "LogPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8-json.log",
	        "Name": "/functional-113333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-113333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-113333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8",
	                "LowerDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-113333",
	                "Source": "/var/lib/docker/volumes/functional-113333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-113333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-113333",
	                "name.minikube.sigs.k8s.io": "functional-113333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a211ba94c8850961796fb0b95cdec4d53ee08039011b058eabdfa970d2029d85",
	            "SandboxKey": "/var/run/docker/netns/a211ba94c885",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-113333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:42:67:f3:c0:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90b72701a62f4e5c7a3409fa4bb2ab5e9e99c71d1e536f1b56e4a3c618dc646d",
	                    "EndpointID": "049ef9c51ec99d3d8642aca3df3c234d511cfe97279244292d3363d54e2d7fca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-113333",
	                        "0e969f65a5f5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-113333 -n functional-113333
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 logs -n 25
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-113333 ssh stat /mount-9p/created-by-pod                                                                               │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh sudo umount -f /mount-9p                                                                                    │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdspecific-port3676981704/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh -- ls -la /mount-9p                                                                                         │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh sudo umount -f /mount-9p                                                                                    │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount2 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount3 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount1                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount1 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount1                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh findmnt -T /mount2                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh findmnt -T /mount3                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ mount          │ -p functional-113333 --kill=true                                                                                                  │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format short --alsologtostderr                                                                       │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format yaml --alsologtostderr                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh pgrep buildkitd                                                                                             │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ image          │ functional-113333 image build -t localhost/my-image:functional-113333 testdata/build --alsologtostderr                            │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls                                                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format json --alsologtostderr                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format table --alsologtostderr                                                                       │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:20:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:20:04.491921  409081 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:04.492007  409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.492014  409081 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:04.492018  409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.492320  409081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:20:04.492755  409081 out.go:368] Setting JSON to false
	I0929 11:20:04.493767  409081 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3748,"bootTime":1759141056,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:20:04.493856  409081 start.go:140] virtualization: kvm guest
	I0929 11:20:04.495673  409081 out.go:179] * [functional-113333] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:20:04.496907  409081 notify.go:220] Checking for updates...
	I0929 11:20:04.496966  409081 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:20:04.498242  409081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:20:04.499707  409081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 11:20:04.501035  409081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 11:20:04.505457  409081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:20:04.506863  409081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:20:04.509025  409081 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:20:04.509717  409081 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:20:04.536233  409081 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:20:04.536391  409081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:04.596439  409081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-09-29 11:20:04.586118728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:20:04.596617  409081 docker.go:318] overlay module found
	I0929 11:20:04.598520  409081 out.go:179] * Using the docker driver based on the existing profile
	I0929 11:20:04.599774  409081 start.go:304] selected driver: docker
	I0929 11:20:04.599789  409081 start.go:924] validating driver "docker" against &{Name:functional-113333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-113333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:20:04.599895  409081 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:20:04.603063  409081 out.go:203] 
	W0929 11:20:04.604206  409081 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 11:20:04.605379  409081 out.go:203] 
	
	
	==> Docker <==
	Sep 29 11:20:15 functional-113333 dockerd[6858]: time="2025-09-29T11:20:15.328706044Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:17 functional-113333 dockerd[6858]: 2025/09/29 11:20:17 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Sep 29 11:20:19 functional-113333 dockerd[6858]: time="2025-09-29T11:20:19.343821858Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.248275811Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.279664315Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.297560529Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.327949749Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:38 functional-113333 dockerd[6858]: time="2025-09-29T11:20:38.320400297Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:46 functional-113333 dockerd[6858]: time="2025-09-29T11:20:46.319883738Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:48 functional-113333 dockerd[6858]: time="2025-09-29T11:20:48.245713777Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:20:48 functional-113333 dockerd[6858]: time="2025-09-29T11:20:48.272860343Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:49 functional-113333 dockerd[6858]: time="2025-09-29T11:20:49.247335940Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:20:49 functional-113333 dockerd[6858]: time="2025-09-29T11:20:49.277658815Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:25 functional-113333 dockerd[6858]: time="2025-09-29T11:21:25.325203091Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:29 functional-113333 dockerd[6858]: time="2025-09-29T11:21:29.249257351Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:21:29 functional-113333 dockerd[6858]: time="2025-09-29T11:21:29.280402159Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:38 functional-113333 dockerd[6858]: time="2025-09-29T11:21:38.248774060Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:21:38 functional-113333 dockerd[6858]: time="2025-09-29T11:21:38.278362990Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:40 functional-113333 dockerd[6858]: time="2025-09-29T11:21:40.317234122Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:22:48 functional-113333 dockerd[6858]: time="2025-09-29T11:22:48.354941240Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:22:59 functional-113333 dockerd[6858]: time="2025-09-29T11:22:59.250100634Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:22:59 functional-113333 dockerd[6858]: time="2025-09-29T11:22:59.278209097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:23:05 functional-113333 dockerd[6858]: time="2025-09-29T11:23:05.329781423Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:23:07 functional-113333 dockerd[6858]: time="2025-09-29T11:23:07.250410392Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:23:07 functional-113333 dockerd[6858]: time="2025-09-29T11:23:07.279213587Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	813edc572aee3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   756b234fa6e2a       busybox-mount
	797ed74fc1800       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   69f30c3f27ac7       hello-node-connect-7d85dfc575-pvq4m
	f19913170bea1       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                         5 minutes ago       Running             nginx                     0                   74ea6477a50a8       nginx-svc
	9233722b13058       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   5174dba697c69       hello-node-75c85bcc94-524nr
	f228fbf887997       df0860106674d                                                                                         5 minutes ago       Running             kube-proxy                3                   d14826ecc1e95       kube-proxy-kp4d8
	66ddd141ec1f6       52546a367cc9e                                                                                         5 minutes ago       Running             coredns                   2                   0daa4d953b658       coredns-66bc5c9577-ndt25
	0c1510903edfc       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       3                   e14d4154c78df       storage-provisioner
	a34f86dc27328       5f1f5298c888d                                                                                         5 minutes ago       Running             etcd                      2                   3ec9b5756cc18       etcd-functional-113333
	264a78f9985e9       90550c43ad2bc                                                                                         5 minutes ago       Running             kube-apiserver            0                   cac40828278ac       kube-apiserver-functional-113333
	1153b7ac7d169       46169d968e920                                                                                         5 minutes ago       Running             kube-scheduler            3                   4466a2147b50c       kube-scheduler-functional-113333
	f40ad3c8f099f       a0af72f2ec6d6                                                                                         5 minutes ago       Running             kube-controller-manager   2                   42f7aadb66137       kube-controller-manager-functional-113333
	f92f6d64d6929       46169d968e920                                                                                         5 minutes ago       Exited              kube-scheduler            2                   ba17dfc161521       kube-scheduler-functional-113333
	a13393a00a30d       df0860106674d                                                                                         5 minutes ago       Exited              kube-proxy                2                   871e0c1c685a0       kube-proxy-kp4d8
	b3296caa44f98       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       2                   3ae050bca60a4       storage-provisioner
	ebb584477fb59       52546a367cc9e                                                                                         6 minutes ago       Exited              coredns                   1                   c858f76b2e6af       coredns-66bc5c9577-ndt25
	fe534996d3885       a0af72f2ec6d6                                                                                         6 minutes ago       Exited              kube-controller-manager   1                   26caa1f2477bb       kube-controller-manager-functional-113333
	d15759c72f024       5f1f5298c888d                                                                                         6 minutes ago       Exited              etcd                      1                   daea5fbf20513       etcd-functional-113333
	
	
	==> coredns [66ddd141ec1f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47174 - 22489 "HINFO IN 8566101316675011462.5533812213724835804. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016422216s
	
	
	==> coredns [ebb584477fb5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49493 - 53604 "HINFO IN 1223955324215989705.3505866021153624538. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.425693464s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-113333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-113333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=functional-113333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_17_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:17:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-113333
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:25:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:20:30 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:20:30 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:20:30 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:20:30 +0000   Mon, 29 Sep 2025 11:17:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-113333
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2c1ed2445d24531beaede9409d240bc
	  System UUID:                0575d937-ba65-482d-bfc6-2fea38fe2d9c
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-524nr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  default                     hello-node-connect-7d85dfc575-pvq4m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m16s
	  default                     mysql-5bb876957f-7fc8m                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m4s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  kube-system                 coredns-66bc5c9577-ndt25                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     7m46s
	  kube-system                 etcd-functional-113333                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         7m52s
	  kube-system                 kube-apiserver-functional-113333              250m (3%)     0 (0%)      0 (0%)           0 (0%)         5m40s
	  kube-system                 kube-controller-manager-functional-113333     200m (2%)     0 (0%)      0 (0%)           0 (0%)         7m53s
	  kube-system                 kube-proxy-kp4d8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kube-system                 kube-scheduler-functional-113333              100m (1%)     0 (0%)      0 (0%)           0 (0%)         7m52s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m46s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vxgjm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xb9xs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m45s                  kube-proxy       
	  Normal  Starting                 5m41s                  kube-proxy       
	  Normal  Starting                 6m37s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m52s                  kubelet          Node functional-113333 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m52s                  kubelet          Node functional-113333 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m52s                  kubelet          Node functional-113333 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m52s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m47s                  node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	  Normal  RegisteredNode           6m34s                  node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	  Normal  Starting                 5m44s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m44s (x8 over 5m44s)  kubelet          Node functional-113333 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m44s (x8 over 5m44s)  kubelet          Node functional-113333 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m44s (x7 over 5m44s)  kubelet          Node functional-113333 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m44s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m39s                  node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 68 62 72 3f fa 08 06
	[  +0.151777] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a d8 70 38 23 e4 08 06
	[Sep29 11:14] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 90 0b cb ca ea 08 06
	[  +2.956459] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e b8 ba d4 3b c3 08 06
	[  +0.000574] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[Sep29 11:15] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 03 82 6d ea 7e 08 06
	[  +0.000575] IPv4: martian source 10.244.0.34 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[  +0.000489] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a d2 63 ea f6 fc 08 06
	[ +12.299165] IPv4: martian source 10.244.0.35 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 90 0b cb ca ea 08 06
	[  +0.326039] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[Sep29 11:17] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a bf 42 60 d0 c2 08 06
	[Sep29 11:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 74 32 c9 0e 09 08 06
	[Sep29 11:19] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 7e 54 87 73 ab b0 08 06
	
	
	==> etcd [a34f86dc2732] <==
	{"level":"warn","ts":"2025-09-29T11:19:28.818486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.832866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.836407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.842951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.848846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.854888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.861324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.867052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.873767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.881986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.887740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.893473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.899284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.905190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.911741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.918130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.924691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.931306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.937510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.943973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.950640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.962928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.968730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.974475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:29.025265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37582","server-name":"","error":"EOF"}
	
	
	==> etcd [d15759c72f02] <==
	{"level":"warn","ts":"2025-09-29T11:18:32.787866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.794501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.800938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.811969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.818018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.823898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.867090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:19:11.921894Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:19:11.921971Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-113333","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T11:19:11.922045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:19:18.923756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:19:18.923901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.923935Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T11:19:18.924071Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:19:18.924088Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:19:18.924504Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:19:18.924570Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:19:18.924583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:19:18.925137Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:19:18.925162Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:19:18.925173Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.926784Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T11:19:18.926844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.926867Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T11:19:18.926893Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-113333","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:25:11 up  1:07,  0 users,  load average: 0.16, 0.63, 1.44
	Linux functional-113333 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [264a78f9985e] <==
	I0929 11:19:30.214291       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 11:19:30.376714       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0929 11:19:30.893419       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:19:30.921395       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:19:30.939238       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:19:30.945891       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:19:32.813315       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 11:19:33.113940       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:19:49.080074       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.238.203"}
	I0929 11:19:53.703390       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:19:53.811677       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.106.207"}
	I0929 11:19:54.765604       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.105.116"}
	I0929 11:19:55.660461       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.31.211"}
	I0929 11:20:07.254585       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.112.162"}
	I0929 11:20:11.760329       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 11:20:11.865830       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.241.54"}
	I0929 11:20:11.875766       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.72.11"}
	I0929 11:20:38.847641       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:20:48.475305       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:22:03.299121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:22:08.579804       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:23:11.462665       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:23:35.006960       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:24:22.064164       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:24:58.857235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [f40ad3c8f099] <==
	I0929 11:19:32.773277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 11:19:32.775521       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:19:32.777750       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:19:32.779925       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 11:19:32.781116       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 11:19:32.783370       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 11:19:32.785612       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:19:32.810084       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:19:32.810106       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 11:19:32.810131       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:19:32.810141       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:19:32.810171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:19:32.810264       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 11:19:32.810284       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 11:19:32.810289       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 11:19:32.811451       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:19:32.812669       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:19:32.815410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:19:32.825597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 11:20:11.807953       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.812019       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.813334       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.816473       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.818178       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.823135       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [fe534996d388] <==
	I0929 11:18:37.926251       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 11:18:37.926267       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:18:37.926311       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:18:37.926415       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:18:37.926505       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:18:37.926633       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:18:37.926641       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:18:37.928534       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:18:37.929641       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:18:37.931843       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 11:18:37.931894       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 11:18:37.932002       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 11:18:37.932054       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 11:18:37.932061       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 11:18:37.932071       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 11:18:37.934137       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 11:18:37.935302       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 11:18:37.935409       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:18:37.935477       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-113333"
	I0929 11:18:37.935514       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:18:37.935768       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:18:37.937689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:18:37.938908       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:18:37.940987       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:18:37.958320       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a13393a00a30] <==
	I0929 11:19:24.202240       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:19:24.271373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 11:19:24.272467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-113333&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [f228fbf88799] <==
	I0929 11:19:30.705538       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:19:30.759473       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:19:30.859648       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:19:30.859681       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:19:30.859762       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:19:30.883864       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:19:30.883939       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:19:30.889927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:19:30.890375       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:19:30.890413       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:19:30.892062       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:19:30.892082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:19:30.892103       1 config.go:200] "Starting service config controller"
	I0929 11:19:30.892111       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:19:30.892177       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:19:30.892235       1 config.go:309] "Starting node config controller"
	I0929 11:19:30.892257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:19:30.892236       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:19:30.992294       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:19:30.992315       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:19:30.993042       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:19:30.993055       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1153b7ac7d16] <==
	I0929 11:19:28.187267       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:19:29.407349       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:19:29.407400       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:19:29.407413       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:19:29.407423       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:19:29.422399       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:19:29.422419       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:19:29.424140       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:19:29.424168       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:19:29.425081       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:19:29.425179       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:19:29.524565       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f92f6d64d692] <==
	I0929 11:19:24.385645       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 29 11:23:51 functional-113333 kubelet[9100]: E0929 11:23:51.232038    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:23:58 functional-113333 kubelet[9100]: E0929 11:23:58.231727    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:23:58 functional-113333 kubelet[9100]: E0929 11:23:58.231796    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:24:00 functional-113333 kubelet[9100]: E0929 11:24:00.230031    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:24:03 functional-113333 kubelet[9100]: E0929 11:24:03.238934    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:24:10 functional-113333 kubelet[9100]: E0929 11:24:10.231610    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:24:11 functional-113333 kubelet[9100]: E0929 11:24:11.231318    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:24:15 functional-113333 kubelet[9100]: E0929 11:24:15.229732    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:24:15 functional-113333 kubelet[9100]: E0929 11:24:15.231702    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:24:22 functional-113333 kubelet[9100]: E0929 11:24:22.231147    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:24:24 functional-113333 kubelet[9100]: E0929 11:24:24.231637    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:24:26 functional-113333 kubelet[9100]: E0929 11:24:26.230175    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:24:26 functional-113333 kubelet[9100]: E0929 11:24:26.232053    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:24:35 functional-113333 kubelet[9100]: E0929 11:24:35.231558    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:24:35 functional-113333 kubelet[9100]: E0929 11:24:35.231652    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:24:40 functional-113333 kubelet[9100]: E0929 11:24:40.232147    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:24:41 functional-113333 kubelet[9100]: E0929 11:24:41.236866    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:24:47 functional-113333 kubelet[9100]: E0929 11:24:47.231840    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:24:47 functional-113333 kubelet[9100]: E0929 11:24:47.231960    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:24:52 functional-113333 kubelet[9100]: E0929 11:24:52.231046    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:24:53 functional-113333 kubelet[9100]: E0929 11:24:53.229576    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:25:01 functional-113333 kubelet[9100]: E0929 11:25:01.231031    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:25:02 functional-113333 kubelet[9100]: E0929 11:25:02.231899    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:25:05 functional-113333 kubelet[9100]: E0929 11:25:05.231678    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:25:07 functional-113333 kubelet[9100]: E0929 11:25:07.230139    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	
	
	==> storage-provisioner [0c1510903edf] <==
	W0929 11:24:47.225714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:49.229025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:49.237147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:51.240443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:51.245952       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:53.249580       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:53.253896       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:55.257102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:55.261991       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:57.265339       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:57.270624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:59.273759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:24:59.277556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:01.280718       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:01.285889       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:03.289135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:03.293152       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:05.296601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:05.300790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:07.303556       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:07.307164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:09.310034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:09.313853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:11.316823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:11.320574       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b3296caa44f9] <==
	I0929 11:18:45.075990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 11:18:45.082442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 11:18:45.082490       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 11:18:45.084662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:48.539506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:52.799812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:56.398213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:59.451540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.473739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.478257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:19:02.478435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 11:19:02.478502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bc00fa55-b5d7-4096-ad35-b571280c955a", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488 became leader
	I0929 11:19:02.478593       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488!
	W0929 11:19:02.480302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.483444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:19:02.578842       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488!
	W0929 11:19:04.486480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:04.490606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:06.494256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:06.498085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:08.501237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:08.506582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:10.509604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:10.513944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-113333 -n functional-113333
helpers_test.go:269: (dbg) Run:  kubectl --context functional-113333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs: exit status 1 (80.491761ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:05 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://813edc572aee3fca8ca39332981b0dc962ca018d4ff0c26f83d50d21bf947de7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 11:20:07 +0000
	      Finished:     Mon, 29 Sep 2025 11:20:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n7jzg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n7jzg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  5m6s  default-scheduler  Successfully assigned default/busybox-mount to functional-113333
	  Normal  Pulling    5m6s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m5s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.434s (1.434s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m5s  kubelet            Created container: mount-munger
	  Normal  Started    5m5s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-7fc8m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:07 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pwbxp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pwbxp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  5m5s                 default-scheduler  Successfully assigned default/mysql-5bb876957f-7fc8m to functional-113333
	  Normal   Pulling    2m7s (x5 over 5m5s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m7s (x5 over 5m5s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m7s (x5 over 5m5s)  kubelet            Error: ErrImagePull
	  Warning  Failed     74s (x15 over 5m4s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    10s (x20 over 5m4s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:00 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vqmng (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vqmng:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m12s                  default-scheduler  Successfully assigned default/sp-pod to functional-113333
	  Warning  Failed     5m11s                  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    2m24s (x5 over 5m11s)  kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     2m24s (x5 over 5m11s)  kubelet            Error: ErrImagePull
	  Warning  Failed     2m24s (x4 over 4m57s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5s (x21 over 5m11s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     5s (x21 over 5m11s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vxgjm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xb9xs" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (301.89s)
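
The failure above is not in the dashboard command itself: the kubelet log and pod events show every docker.io pull (kubernetesui/dashboard, kubernetesui/metrics-scraper, mysql, nginx) rejected with toomanyrequests, Docker Hub's unauthenticated pull rate limit. Below is a minimal, illustrative sketch of making a rerun pull with credentials instead of anonymously; it is not part of the recorded run. The deployment name kubernetes-dashboard is inferred from the pod name in the log, and dockerhub-cred, <user>, and <token> are placeholders.

	# Create a Docker Hub pull secret in the dashboard namespace (placeholders: <user>, <token>)
	kubectl --context functional-113333 -n kubernetes-dashboard create secret docker-registry dockerhub-cred \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username=<user> --docker-password=<token>
	# Attach the secret to the dashboard deployment's pod template so replacement pods pull as an authenticated client
	kubectl --context functional-113333 -n kubernetes-dashboard patch deployment kubernetes-dashboard \
	  --type merge -p '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"dockerhub-cred"}]}}}}'

With the secret in place, the rolled-out pods pull the digest-pinned dashboard image under the higher authenticated rate limit rather than the anonymous one, which is what the back-off loop above keeps hitting.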

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (368.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [243409a8-8428-4e5f-a1dc-27ee58b9e23a] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004152034s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-113333 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-113333 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-113333 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-113333 apply -f testdata/storage-provisioner/pod.yaml
I0929 11:20:00.623046  360782 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [686185b3-6518-44ab-a785-e5ad567bf76c] Pending
E0929 11:20:01.572963  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [686185b3-6518-44ab-a785-e5ad567bf76c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
functional_test_pvc_test.go:140: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 6m0s: context deadline exceeded ****
functional_test_pvc_test.go:140: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-113333 -n functional-113333
functional_test_pvc_test.go:140: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2025-09-29 11:26:00.928909146 +0000 UTC m=+842.972721229
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-113333 describe po sp-pod -n default
functional_test_pvc_test.go:140: (dbg) kubectl --context functional-113333 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-113333/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:20:00 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:  10.244.0.10
Containers:
myfrontend:
Container ID:   
Image:          docker.io/nginx
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/tmp/mount from mypd (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vqmng (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
mypd:
Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName:  myclaim
ReadOnly:   false
kube-api-access-vqmng:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                    From               Message
----     ------     ----                   ----               -------
Normal   Scheduled  6m                     default-scheduler  Successfully assigned default/sp-pod to functional-113333
Warning  Failed     5m59s                  kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   Pulling    3m12s (x5 over 5m59s)  kubelet            Pulling image "docker.io/nginx"
Warning  Failed     3m12s (x5 over 5m59s)  kubelet            Error: ErrImagePull
Warning  Failed     3m12s (x4 over 5m45s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Normal   BackOff    53s (x21 over 5m59s)   kubelet            Back-off pulling image "docker.io/nginx"
Warning  Failed     53s (x21 over 5m59s)   kubelet            Error: ImagePullBackOff
functional_test_pvc_test.go:140: (dbg) Run:  kubectl --context functional-113333 logs sp-pod -n default
functional_test_pvc_test.go:140: (dbg) Non-zero exit: kubectl --context functional-113333 logs sp-pod -n default: exit status 1 (73.620316ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test_pvc_test.go:140: kubectl --context functional-113333 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:141: failed waiting for pvctest pod : test=storage-provisioner within 6m0s: context deadline exceeded
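As with DashboardCmd, the timeout here is an image-pull failure rather than a storage failure: sp-pod was scheduled, its mypd volume references the myclaim PVC, and the storage-provisioner log above shows the hostpath provisioner holding its leader lease, while every docker.io/nginx pull is rejected with toomanyrequests. The following is a short, illustrative check sequence for separating provisioning problems from pull problems on this profile (assumed commands, not part of the recorded run):

	# Confirm the claim actually bound to a provisioned volume
	kubectl --context functional-113333 get pvc myclaim
	kubectl --context functional-113333 get pv
	# Confirm the only failing step is the image pull
	kubectl --context functional-113333 get events -n default --field-selector involvedObject.name=sp-pod

A Bound claim together with ErrImagePull/ImagePullBackOff events would point at the registry rate limit rather than at the storage-provisioner addon under test.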
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-113333
helpers_test.go:243: (dbg) docker inspect functional-113333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8",
	        "Created": "2025-09-29T11:17:04.817558805Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 391650,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:17:04.849941498Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/hosts",
	        "LogPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8-json.log",
	        "Name": "/functional-113333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-113333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-113333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8",
	                "LowerDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-113333",
	                "Source": "/var/lib/docker/volumes/functional-113333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-113333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-113333",
	                "name.minikube.sigs.k8s.io": "functional-113333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a211ba94c8850961796fb0b95cdec4d53ee08039011b058eabdfa970d2029d85",
	            "SandboxKey": "/var/run/docker/netns/a211ba94c885",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-113333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:42:67:f3:c0:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90b72701a62f4e5c7a3409fa4bb2ab5e9e99c71d1e536f1b56e4a3c618dc646d",
	                    "EndpointID": "049ef9c51ec99d3d8642aca3df3c234d511cfe97279244292d3363d54e2d7fca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-113333",
	                        "0e969f65a5f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-113333 -n functional-113333
helpers_test.go:252: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 logs -n 25
helpers_test.go:260: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-113333 ssh stat /mount-9p/created-by-pod                                                                               │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh sudo umount -f /mount-9p                                                                                    │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdspecific-port3676981704/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh -- ls -la /mount-9p                                                                                         │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh sudo umount -f /mount-9p                                                                                    │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount2 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount3 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount1                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount1 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount1                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh findmnt -T /mount2                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh findmnt -T /mount3                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ mount          │ -p functional-113333 --kill=true                                                                                                  │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format short --alsologtostderr                                                                       │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format yaml --alsologtostderr                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh pgrep buildkitd                                                                                             │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ image          │ functional-113333 image build -t localhost/my-image:functional-113333 testdata/build --alsologtostderr                            │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls                                                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format json --alsologtostderr                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format table --alsologtostderr                                                                       │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:20:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:20:04.491921  409081 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:04.492007  409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.492014  409081 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:04.492018  409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.492320  409081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:20:04.492755  409081 out.go:368] Setting JSON to false
	I0929 11:20:04.493767  409081 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3748,"bootTime":1759141056,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:20:04.493856  409081 start.go:140] virtualization: kvm guest
	I0929 11:20:04.495673  409081 out.go:179] * [functional-113333] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:20:04.496907  409081 notify.go:220] Checking for updates...
	I0929 11:20:04.496966  409081 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:20:04.498242  409081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:20:04.499707  409081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 11:20:04.501035  409081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 11:20:04.505457  409081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:20:04.506863  409081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:20:04.509025  409081 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:20:04.509717  409081 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:20:04.536233  409081 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:20:04.536391  409081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:04.596439  409081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-09-29 11:20:04.586118728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:20:04.596617  409081 docker.go:318] overlay module found
	I0929 11:20:04.598520  409081 out.go:179] * Using the docker driver based on existing profile
	I0929 11:20:04.599774  409081 start.go:304] selected driver: docker
	I0929 11:20:04.599789  409081 start.go:924] validating driver "docker" against &{Name:functional-113333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-113333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:20:04.599895  409081 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:20:04.603063  409081 out.go:203] 
	W0929 11:20:04.604206  409081 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 11:20:04.605379  409081 out.go:203] 
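	
	Note: the RSRC_INSUFFICIENT_REQ_MEMORY exit above is minikube's pre-flight check rejecting a start request whose --memory value (250MiB) is below the 1800MB floor. The exact invocation that triggered it is not shown in this excerpt; the commands below are an illustrative sketch only, not taken from this run:
	  # reproduces the failure: 250 MiB is under minikube's usable minimum
	  minikube start -p functional-113333 --driver=docker --memory=250mb
	  # passes the memory pre-flight check
	  minikube start -p functional-113333 --driver=docker --memory=1800mb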
	
	
	==> Docker <==
	Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.327949749Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:38 functional-113333 dockerd[6858]: time="2025-09-29T11:20:38.320400297Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:46 functional-113333 dockerd[6858]: time="2025-09-29T11:20:46.319883738Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:48 functional-113333 dockerd[6858]: time="2025-09-29T11:20:48.245713777Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:20:48 functional-113333 dockerd[6858]: time="2025-09-29T11:20:48.272860343Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:49 functional-113333 dockerd[6858]: time="2025-09-29T11:20:49.247335940Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:20:49 functional-113333 dockerd[6858]: time="2025-09-29T11:20:49.277658815Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:25 functional-113333 dockerd[6858]: time="2025-09-29T11:21:25.325203091Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:29 functional-113333 dockerd[6858]: time="2025-09-29T11:21:29.249257351Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:21:29 functional-113333 dockerd[6858]: time="2025-09-29T11:21:29.280402159Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:38 functional-113333 dockerd[6858]: time="2025-09-29T11:21:38.248774060Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:21:38 functional-113333 dockerd[6858]: time="2025-09-29T11:21:38.278362990Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:40 functional-113333 dockerd[6858]: time="2025-09-29T11:21:40.317234122Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:22:48 functional-113333 dockerd[6858]: time="2025-09-29T11:22:48.354941240Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:22:59 functional-113333 dockerd[6858]: time="2025-09-29T11:22:59.250100634Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:22:59 functional-113333 dockerd[6858]: time="2025-09-29T11:22:59.278209097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:23:05 functional-113333 dockerd[6858]: time="2025-09-29T11:23:05.329781423Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:23:07 functional-113333 dockerd[6858]: time="2025-09-29T11:23:07.250410392Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:23:07 functional-113333 dockerd[6858]: time="2025-09-29T11:23:07.279213587Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:25:32 functional-113333 dockerd[6858]: time="2025-09-29T11:25:32.337386867Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:25:44 functional-113333 dockerd[6858]: time="2025-09-29T11:25:44.248273197Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:25:44 functional-113333 dockerd[6858]: time="2025-09-29T11:25:44.276180177Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:25:51 functional-113333 dockerd[6858]: time="2025-09-29T11:25:51.322181273Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:26:01 functional-113333 dockerd[6858]: time="2025-09-29T11:26:01.250752374Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:26:01 functional-113333 dockerd[6858]: time="2025-09-29T11:26:01.281670701Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
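	
	Note: the repeated "toomanyrequests" entries above are Docker Hub's unauthenticated pull rate limit blocking the kubernetesui/dashboard and kubernetesui/metrics-scraper image pulls, which is why the dashboard-metrics-scraper pod in the kubelet log further down sits in ImagePullBackOff. Two common mitigations, sketched with illustrative values (the mirror URL and credentials are assumptions, not taken from this run):
	  # log the node's Docker daemon into Docker Hub so pulls count against an authenticated quota
	  minikube -p functional-113333 ssh
	  docker login -u <dockerhub-user>   # supply the password or access token when prompted
	  # or recreate the cluster with a registry mirror so docker.io pulls bypass the anonymous limit
	  minikube start -p functional-113333 --registry-mirror=https://mirror.gcr.io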
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	813edc572aee3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   5 minutes ago       Exited              mount-munger              0                   756b234fa6e2a       busybox-mount
	797ed74fc1800       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   69f30c3f27ac7       hello-node-connect-7d85dfc575-pvq4m
	f19913170bea1       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                         6 minutes ago       Running             nginx                     0                   74ea6477a50a8       nginx-svc
	9233722b13058       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           6 minutes ago       Running             echo-server               0                   5174dba697c69       hello-node-75c85bcc94-524nr
	f228fbf887997       df0860106674d                                                                                         6 minutes ago       Running             kube-proxy                3                   d14826ecc1e95       kube-proxy-kp4d8
	66ddd141ec1f6       52546a367cc9e                                                                                         6 minutes ago       Running             coredns                   2                   0daa4d953b658       coredns-66bc5c9577-ndt25
	0c1510903edfc       6e38f40d628db                                                                                         6 minutes ago       Running             storage-provisioner       3                   e14d4154c78df       storage-provisioner
	a34f86dc27328       5f1f5298c888d                                                                                         6 minutes ago       Running             etcd                      2                   3ec9b5756cc18       etcd-functional-113333
	264a78f9985e9       90550c43ad2bc                                                                                         6 minutes ago       Running             kube-apiserver            0                   cac40828278ac       kube-apiserver-functional-113333
	1153b7ac7d169       46169d968e920                                                                                         6 minutes ago       Running             kube-scheduler            3                   4466a2147b50c       kube-scheduler-functional-113333
	f40ad3c8f099f       a0af72f2ec6d6                                                                                         6 minutes ago       Running             kube-controller-manager   2                   42f7aadb66137       kube-controller-manager-functional-113333
	f92f6d64d6929       46169d968e920                                                                                         6 minutes ago       Exited              kube-scheduler            2                   ba17dfc161521       kube-scheduler-functional-113333
	a13393a00a30d       df0860106674d                                                                                         6 minutes ago       Exited              kube-proxy                2                   871e0c1c685a0       kube-proxy-kp4d8
	b3296caa44f98       6e38f40d628db                                                                                         7 minutes ago       Exited              storage-provisioner       2                   3ae050bca60a4       storage-provisioner
	ebb584477fb59       52546a367cc9e                                                                                         7 minutes ago       Exited              coredns                   1                   c858f76b2e6af       coredns-66bc5c9577-ndt25
	fe534996d3885       a0af72f2ec6d6                                                                                         7 minutes ago       Exited              kube-controller-manager   1                   26caa1f2477bb       kube-controller-manager-functional-113333
	d15759c72f024       5f1f5298c888d                                                                                         7 minutes ago       Exited              etcd                      1                   daea5fbf20513       etcd-functional-113333
	
	
	==> coredns [66ddd141ec1f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47174 - 22489 "HINFO IN 8566101316675011462.5533812213724835804. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016422216s
	
	
	==> coredns [ebb584477fb5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49493 - 53604 "HINFO IN 1223955324215989705.3505866021153624538. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.425693464s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-113333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-113333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=functional-113333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_17_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:17:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-113333
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:25:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:20:30 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:20:30 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:20:30 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:20:30 +0000   Mon, 29 Sep 2025 11:17:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-113333
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2c1ed2445d24531beaede9409d240bc
	  System UUID:                0575d937-ba65-482d-bfc6-2fea38fe2d9c
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-524nr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	  default                     hello-node-connect-7d85dfc575-pvq4m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m7s
	  default                     mysql-5bb876957f-7fc8m                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     5m55s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m8s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 coredns-66bc5c9577-ndt25                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     8m37s
	  kube-system                 etcd-functional-113333                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8m43s
	  kube-system                 kube-apiserver-functional-113333              250m (3%)     0 (0%)      0 (0%)           0 (0%)         6m31s
	  kube-system                 kube-controller-manager-functional-113333     200m (2%)     0 (0%)      0 (0%)           0 (0%)         8m44s
	  kube-system                 kube-proxy-kp4d8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kube-system                 kube-scheduler-functional-113333              100m (1%)     0 (0%)      0 (0%)           0 (0%)         8m43s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m37s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vxgjm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xb9xs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m35s                  kube-proxy       
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  Starting                 7m27s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  8m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m43s                  kubelet          Node functional-113333 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m43s                  kubelet          Node functional-113333 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m43s                  kubelet          Node functional-113333 status is now: NodeHasSufficientPID
	  Normal  Starting                 8m43s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           8m38s                  node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	  Normal  RegisteredNode           7m25s                  node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	  Normal  Starting                 6m35s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m35s (x8 over 6m35s)  kubelet          Node functional-113333 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m35s (x8 over 6m35s)  kubelet          Node functional-113333 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m35s (x7 over 6m35s)  kubelet          Node functional-113333 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m35s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m30s                  node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 68 62 72 3f fa 08 06
	[  +0.151777] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a d8 70 38 23 e4 08 06
	[Sep29 11:14] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 90 0b cb ca ea 08 06
	[  +2.956459] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e b8 ba d4 3b c3 08 06
	[  +0.000574] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[Sep29 11:15] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 03 82 6d ea 7e 08 06
	[  +0.000575] IPv4: martian source 10.244.0.34 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[  +0.000489] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a d2 63 ea f6 fc 08 06
	[ +12.299165] IPv4: martian source 10.244.0.35 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 90 0b cb ca ea 08 06
	[  +0.326039] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[Sep29 11:17] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a bf 42 60 d0 c2 08 06
	[Sep29 11:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 74 32 c9 0e 09 08 06
	[Sep29 11:19] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 7e 54 87 73 ab b0 08 06
	
	
	==> etcd [a34f86dc2732] <==
	{"level":"warn","ts":"2025-09-29T11:19:28.818486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.832866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37150","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.836407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.842951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.848846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.854888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.861324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.867052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.873767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.881986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.887740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.893473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.899284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.905190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.911741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.918130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.924691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.931306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.937510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.943973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.950640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.962928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.968730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.974475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:29.025265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37582","server-name":"","error":"EOF"}
	
	
	==> etcd [d15759c72f02] <==
	{"level":"warn","ts":"2025-09-29T11:18:32.787866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.794501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.800938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.811969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.818018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.823898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.867090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:19:11.921894Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:19:11.921971Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-113333","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T11:19:11.922045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:19:18.923756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:19:18.923901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.923935Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T11:19:18.924071Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:19:18.924088Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:19:18.924504Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:19:18.924570Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:19:18.924583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:19:18.925137Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:19:18.925162Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:19:18.925173Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.926784Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T11:19:18.926844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.926867Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T11:19:18.926893Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-113333","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:26:02 up  1:08,  0 users,  load average: 0.18, 0.56, 1.37
	Linux functional-113333 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [264a78f9985e] <==
	I0929 11:19:30.893419       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 11:19:30.921395       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 11:19:30.939238       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 11:19:30.945891       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 11:19:32.813315       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 11:19:33.113940       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 11:19:49.080074       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.98.238.203"}
	I0929 11:19:53.703390       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:19:53.811677       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.106.207"}
	I0929 11:19:54.765604       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.105.116"}
	I0929 11:19:55.660461       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.31.211"}
	I0929 11:20:07.254585       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.112.162"}
	I0929 11:20:11.760329       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 11:20:11.865830       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.241.54"}
	I0929 11:20:11.875766       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.72.11"}
	I0929 11:20:38.847641       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:20:48.475305       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:22:03.299121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:22:08.579804       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:23:11.462665       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:23:35.006960       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:24:22.064164       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:24:58.857235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:25:36.822054       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:26:00.075558       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [f40ad3c8f099] <==
	I0929 11:19:32.773277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 11:19:32.775521       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:19:32.777750       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:19:32.779925       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 11:19:32.781116       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 11:19:32.783370       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 11:19:32.785612       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:19:32.810084       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:19:32.810106       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 11:19:32.810131       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:19:32.810141       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:19:32.810171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:19:32.810264       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 11:19:32.810284       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 11:19:32.810289       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 11:19:32.811451       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:19:32.812669       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:19:32.815410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:19:32.825597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 11:20:11.807953       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.812019       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.813334       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.816473       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.818178       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.823135       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [fe534996d388] <==
	I0929 11:18:37.926251       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 11:18:37.926267       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:18:37.926311       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:18:37.926415       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:18:37.926505       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:18:37.926633       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:18:37.926641       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:18:37.928534       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:18:37.929641       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:18:37.931843       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 11:18:37.931894       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 11:18:37.932002       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 11:18:37.932054       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 11:18:37.932061       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 11:18:37.932071       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 11:18:37.934137       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 11:18:37.935302       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 11:18:37.935409       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:18:37.935477       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-113333"
	I0929 11:18:37.935514       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:18:37.935768       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:18:37.937689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:18:37.938908       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:18:37.940987       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:18:37.958320       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a13393a00a30] <==
	I0929 11:19:24.202240       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:19:24.271373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 11:19:24.272467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-113333&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [f228fbf88799] <==
	I0929 11:19:30.705538       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:19:30.759473       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:19:30.859648       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:19:30.859681       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:19:30.859762       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:19:30.883864       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:19:30.883939       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:19:30.889927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:19:30.890375       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:19:30.890413       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:19:30.892062       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:19:30.892082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:19:30.892103       1 config.go:200] "Starting service config controller"
	I0929 11:19:30.892111       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:19:30.892177       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:19:30.892235       1 config.go:309] "Starting node config controller"
	I0929 11:19:30.892257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:19:30.892236       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:19:30.992294       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:19:30.992315       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:19:30.993042       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:19:30.993055       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1153b7ac7d16] <==
	I0929 11:19:28.187267       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:19:29.407349       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:19:29.407400       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:19:29.407413       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:19:29.407423       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:19:29.422399       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:19:29.422419       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:19:29.424140       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:19:29.424168       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:19:29.425081       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:19:29.425179       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:19:29.524565       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f92f6d64d692] <==
	I0929 11:19:24.385645       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 29 11:25:25 functional-113333 kubelet[9100]: E0929 11:25:25.232144    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:25:28 functional-113333 kubelet[9100]: E0929 11:25:28.231294    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:25:32 functional-113333 kubelet[9100]: E0929 11:25:32.231332    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:25:32 functional-113333 kubelet[9100]: E0929 11:25:32.340141    9100 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:25:32 functional-113333 kubelet[9100]: E0929 11:25:32.340205    9100 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Sep 29 11:25:32 functional-113333 kubelet[9100]: E0929 11:25:32.340294    9100 kuberuntime_manager.go:1449] "Unhandled Error" err="container myfrontend start failed in pod sp-pod_default(686185b3-6518-44ab-a785-e5ad567bf76c): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:25:32 functional-113333 kubelet[9100]: E0929 11:25:32.340330    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:25:36 functional-113333 kubelet[9100]: E0929 11:25:36.232146    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:25:40 functional-113333 kubelet[9100]: E0929 11:25:40.230967    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:25:44 functional-113333 kubelet[9100]: E0929 11:25:44.229243    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:25:44 functional-113333 kubelet[9100]: E0929 11:25:44.278704    9100 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:25:44 functional-113333 kubelet[9100]: E0929 11:25:44.278771    9100 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:25:44 functional-113333 kubelet[9100]: E0929 11:25:44.278908    9100 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-xb9xs_kubernetes-dashboard(65959828-b43c-46d9-aaf1-caea5d07f5dd): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:25:44 functional-113333 kubelet[9100]: E0929 11:25:44.278954    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:25:47 functional-113333 kubelet[9100]: E0929 11:25:47.237641    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:25:51 functional-113333 kubelet[9100]: E0929 11:25:51.324453    9100 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 11:25:51 functional-113333 kubelet[9100]: E0929 11:25:51.324512    9100 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/mysql:5.7"
	Sep 29 11:25:51 functional-113333 kubelet[9100]: E0929 11:25:51.324595    9100 kuberuntime_manager.go:1449] "Unhandled Error" err="container mysql start failed in pod mysql-5bb876957f-7fc8m_default(15138e7a-750d-441a-9416-b3684980644f): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:25:51 functional-113333 kubelet[9100]: E0929 11:25:51.324631    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:25:55 functional-113333 kubelet[9100]: E0929 11:25:55.231905    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:25:59 functional-113333 kubelet[9100]: E0929 11:25:59.229761    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:26:01 functional-113333 kubelet[9100]: E0929 11:26:01.284339    9100 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:26:01 functional-113333 kubelet[9100]: E0929 11:26:01.284412    9100 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:26:01 functional-113333 kubelet[9100]: E0929 11:26:01.284517    9100 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm_kubernetes-dashboard(367d1ac4-a750-4f02-9e98-a40f80485812): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 11:26:01 functional-113333 kubelet[9100]: E0929 11:26:01.284557    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	
	
	==> storage-provisioner [0c1510903edf] <==
	W0929 11:25:37.421017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:39.424313       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:39.429347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:41.432354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:41.436277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:43.439683       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:43.444790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:45.448503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:45.452442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:47.455210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:47.459933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:49.462636       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:49.466544       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:51.470160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:51.473846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:53.476667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:53.481795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:55.484464       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:55.488610       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:57.491943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:57.495787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:59.498476       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:25:59.502183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:26:01.505489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:26:01.511261       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b3296caa44f9] <==
	I0929 11:18:45.075990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 11:18:45.082442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 11:18:45.082490       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 11:18:45.084662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:48.539506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:52.799812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:56.398213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:59.451540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.473739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.478257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:19:02.478435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 11:19:02.478502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bc00fa55-b5d7-4096-ad35-b571280c955a", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488 became leader
	I0929 11:19:02.478593       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488!
	W0929 11:19:02.480302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.483444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:19:02.578842       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488!
	W0929 11:19:04.486480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:04.490606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:06.494256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:06.498085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:08.501237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:08.506582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:10.509604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:10.513944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-113333 -n functional-113333
helpers_test.go:269: (dbg) Run:  kubectl --context functional-113333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs: exit status 1 (80.56513ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:05 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://813edc572aee3fca8ca39332981b0dc962ca018d4ff0c26f83d50d21bf947de7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 11:20:07 +0000
	      Finished:     Mon, 29 Sep 2025 11:20:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n7jzg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n7jzg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m56s  default-scheduler  Successfully assigned default/busybox-mount to functional-113333
	  Normal  Pulling    5m56s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     5m55s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.434s (1.434s including waiting). Image size: 4403845 bytes.
	  Normal  Created    5m55s  kubelet            Created container: mount-munger
	  Normal  Started    5m55s  kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-7fc8m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:07 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pwbxp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pwbxp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  5m55s                  default-scheduler  Successfully assigned default/mysql-5bb876957f-7fc8m to functional-113333
	  Normal   Pulling    2m57s (x5 over 5m55s)  kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     2m57s (x5 over 5m55s)  kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     2m57s (x5 over 5m55s)  kubelet            Error: ErrImagePull
	  Normal   BackOff    47s (x21 over 5m54s)   kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     47s (x21 over 5m54s)   kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:00 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vqmng (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vqmng:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  6m2s                   default-scheduler  Successfully assigned default/sp-pod to functional-113333
	  Warning  Failed     6m1s                   kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    3m14s (x5 over 6m1s)   kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     3m14s (x5 over 6m1s)   kubelet            Error: ErrImagePull
	  Warning  Failed     3m14s (x4 over 5m47s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    55s (x21 over 6m1s)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     55s (x21 over 6m1s)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vxgjm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xb9xs" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs: exit status 1
E0929 11:29:20.587966  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (368.59s)
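For reference, the post-mortem step above lists non-running pods with kubectl and the field selector status.phase!=Running. The sketch below is an illustration only, not part of the harness: it reproduces the same query with client-go, and it assumes the default kubeconfig rather than the test's kubectl --context functional-113333 invocation.

// Illustrative sketch (not harness code): list pods in all namespaces whose
// phase is not Running, mirroring the kubectl field selector used above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default kubeconfig at ~/.kube/config; the test instead targets
	// the functional-113333 context explicitly.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	pods, err := cs.CoreV1().Pods("").List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running", // same selector as the kubectl call above
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s phase=%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}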

                                                
                                    
TestFunctional/parallel/MySQL (602.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-113333 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-7fc8m" [15138e7a-750d-441a-9416-b3684980644f] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:337: TestFunctional/parallel/MySQL: WARNING: pod list for "default" "app=mysql" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1804: ***** TestFunctional/parallel/MySQL: pod "app=mysql" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1804: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-113333 -n functional-113333
functional_test.go:1804: TestFunctional/parallel/MySQL: showing logs for failed pods as of 2025-09-29 11:30:07.608627377 +0000 UTC m=+1089.652439470
functional_test.go:1804: (dbg) Run:  kubectl --context functional-113333 describe po mysql-5bb876957f-7fc8m -n default
functional_test.go:1804: (dbg) kubectl --context functional-113333 describe po mysql-5bb876957f-7fc8m -n default:
Name:             mysql-5bb876957f-7fc8m
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-113333/192.168.49.2
Start Time:       Mon, 29 Sep 2025 11:20:07 +0000
Labels:           app=mysql
pod-template-hash=5bb876957f
Annotations:      <none>
Status:           Pending
IP:               10.244.0.12
IPs:
IP:           10.244.0.12
Controlled By:  ReplicaSet/mysql-5bb876957f
Containers:
mysql:
Container ID:   
Image:          docker.io/mysql:5.7
Image ID:       
Port:           3306/TCP (mysql)
Host Port:      0/TCP (mysql)
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Limits:
cpu:     700m
memory:  700Mi
Requests:
cpu:     600m
memory:  512Mi
Environment:
MYSQL_ROOT_PASSWORD:  password
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pwbxp (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-pwbxp:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/mysql-5bb876957f-7fc8m to functional-113333
Normal   Pulling    7m2s (x5 over 10m)      kubelet            Pulling image "docker.io/mysql:5.7"
Warning  Failed     7m2s (x5 over 10m)      kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m2s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m52s (x21 over 9m59s)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
Warning  Failed     4m52s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1804: (dbg) Run:  kubectl --context functional-113333 logs mysql-5bb876957f-7fc8m -n default
functional_test.go:1804: (dbg) Non-zero exit: kubectl --context functional-113333 logs mysql-5bb876957f-7fc8m -n default: exit status 1 (67.622668ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "mysql" in pod "mysql-5bb876957f-7fc8m" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1804: kubectl --context functional-113333 logs mysql-5bb876957f-7fc8m -n default: exit status 1
functional_test.go:1806: failed waiting for mysql pod: app=mysql within 10m0s: context deadline exceeded
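The 10m0s wait reported above is performed by the test harness itself; purely as an illustration (this is not the functional_test.go implementation), an equivalent client-go poll for an app=mysql pod in the default namespace to reach phase Running within a deadline could look like the following.

// Illustrative sketch (not harness code): poll for up to 10 minutes for a pod
// labelled app=mysql in "default" to reach phase Running.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: default kubeconfig; the test targets --context functional-113333.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 10*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("default").List(ctx, metav1.ListOptions{LabelSelector: "app=mysql"})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		// An ImagePullBackOff pod, as in this report, ends here with
		// "context deadline exceeded".
		fmt.Println("pod app=mysql did not reach Running:", err)
		return
	}
	fmt.Println("pod app=mysql is Running")
}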
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/MySQL]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/MySQL]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-113333
helpers_test.go:243: (dbg) docker inspect functional-113333:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8",
	        "Created": "2025-09-29T11:17:04.817558805Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 391650,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T11:17:04.849941498Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/hosts",
	        "LogPath": "/var/lib/docker/containers/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8/0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8-json.log",
	        "Name": "/functional-113333",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-113333:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-113333",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e969f65a5f53fc9264ed0e6040a8b0887260fcb65421b1fe7c9b63e9f227ba8",
	                "LowerDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8cc101409d56979bc21ca10fbfb120097217eddf7a810fdf2e8f2e3e78d516cb/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-113333",
	                "Source": "/var/lib/docker/volumes/functional-113333/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-113333",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-113333",
	                "name.minikube.sigs.k8s.io": "functional-113333",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a211ba94c8850961796fb0b95cdec4d53ee08039011b058eabdfa970d2029d85",
	            "SandboxKey": "/var/run/docker/netns/a211ba94c885",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33148"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33149"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33152"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33150"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33151"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-113333": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "b6:42:67:f3:c0:76",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "90b72701a62f4e5c7a3409fa4bb2ab5e9e99c71d1e536f1b56e4a3c618dc646d",
	                    "EndpointID": "049ef9c51ec99d3d8642aca3df3c234d511cfe97279244292d3363d54e2d7fca",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-113333",
	                        "0e969f65a5f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-113333 -n functional-113333
helpers_test.go:252: <<< TestFunctional/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/MySQL]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 logs -n 25
helpers_test.go:260: TestFunctional/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-113333 ssh stat /mount-9p/created-by-pod                                                                               │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh sudo umount -f /mount-9p                                                                                    │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdspecific-port3676981704/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh -- ls -la /mount-9p                                                                                         │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh sudo umount -f /mount-9p                                                                                    │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount2 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount3 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount1                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ mount          │ -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount1 --alsologtostderr -v=1                 │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ ssh            │ functional-113333 ssh findmnt -T /mount1                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh findmnt -T /mount2                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh findmnt -T /mount3                                                                                          │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ mount          │ -p functional-113333 --kill=true                                                                                                  │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ update-context │ functional-113333 update-context --alsologtostderr -v=2                                                                           │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format short --alsologtostderr                                                                       │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format yaml --alsologtostderr                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ ssh            │ functional-113333 ssh pgrep buildkitd                                                                                             │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │                     │
	│ image          │ functional-113333 image build -t localhost/my-image:functional-113333 testdata/build --alsologtostderr                            │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls                                                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format json --alsologtostderr                                                                        │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	│ image          │ functional-113333 image ls --format table --alsologtostderr                                                                       │ functional-113333 │ jenkins │ v1.37.0 │ 29 Sep 25 11:20 UTC │ 29 Sep 25 11:20 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:20:04
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:20:04.491921  409081 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:04.492007  409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.492014  409081 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:04.492018  409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.492320  409081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:20:04.492755  409081 out.go:368] Setting JSON to false
	I0929 11:20:04.493767  409081 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3748,"bootTime":1759141056,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:20:04.493856  409081 start.go:140] virtualization: kvm guest
	I0929 11:20:04.495673  409081 out.go:179] * [functional-113333] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:20:04.496907  409081 notify.go:220] Checking for updates...
	I0929 11:20:04.496966  409081 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:20:04.498242  409081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:20:04.499707  409081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 11:20:04.501035  409081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 11:20:04.505457  409081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:20:04.506863  409081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:20:04.509025  409081 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:20:04.509717  409081 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:20:04.536233  409081 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:20:04.536391  409081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:04.596439  409081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-09-29 11:20:04.586118728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:20:04.596617  409081 docker.go:318] overlay module found
	I0929 11:20:04.598520  409081 out.go:179] * Using the docker driver based on the existing profile
	I0929 11:20:04.599774  409081 start.go:304] selected driver: docker
	I0929 11:20:04.599789  409081 start.go:924] validating driver "docker" against &{Name:functional-113333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-113333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:20:04.599895  409081 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:20:04.603063  409081 out.go:203] 
	W0929 11:20:04.604206  409081 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250 MiB is below the usable minimum of 1800 MB
	I0929 11:20:04.605379  409081 out.go:203] 
	
	
	==> Docker <==
	Sep 29 11:20:24 functional-113333 dockerd[6858]: time="2025-09-29T11:20:24.327949749Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:38 functional-113333 dockerd[6858]: time="2025-09-29T11:20:38.320400297Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:46 functional-113333 dockerd[6858]: time="2025-09-29T11:20:46.319883738Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:48 functional-113333 dockerd[6858]: time="2025-09-29T11:20:48.245713777Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:20:48 functional-113333 dockerd[6858]: time="2025-09-29T11:20:48.272860343Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:20:49 functional-113333 dockerd[6858]: time="2025-09-29T11:20:49.247335940Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:20:49 functional-113333 dockerd[6858]: time="2025-09-29T11:20:49.277658815Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:25 functional-113333 dockerd[6858]: time="2025-09-29T11:21:25.325203091Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:29 functional-113333 dockerd[6858]: time="2025-09-29T11:21:29.249257351Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:21:29 functional-113333 dockerd[6858]: time="2025-09-29T11:21:29.280402159Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:38 functional-113333 dockerd[6858]: time="2025-09-29T11:21:38.248774060Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:21:38 functional-113333 dockerd[6858]: time="2025-09-29T11:21:38.278362990Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:21:40 functional-113333 dockerd[6858]: time="2025-09-29T11:21:40.317234122Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:22:48 functional-113333 dockerd[6858]: time="2025-09-29T11:22:48.354941240Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:22:59 functional-113333 dockerd[6858]: time="2025-09-29T11:22:59.250100634Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:22:59 functional-113333 dockerd[6858]: time="2025-09-29T11:22:59.278209097Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:23:05 functional-113333 dockerd[6858]: time="2025-09-29T11:23:05.329781423Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:23:07 functional-113333 dockerd[6858]: time="2025-09-29T11:23:07.250410392Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:23:07 functional-113333 dockerd[6858]: time="2025-09-29T11:23:07.279213587Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:25:32 functional-113333 dockerd[6858]: time="2025-09-29T11:25:32.337386867Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:25:44 functional-113333 dockerd[6858]: time="2025-09-29T11:25:44.248273197Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 11:25:44 functional-113333 dockerd[6858]: time="2025-09-29T11:25:44.276180177Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:25:51 functional-113333 dockerd[6858]: time="2025-09-29T11:25:51.322181273Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 11:26:01 functional-113333 dockerd[6858]: time="2025-09-29T11:26:01.250752374Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 11:26:01 functional-113333 dockerd[6858]: time="2025-09-29T11:26:01.281670701Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	813edc572aee3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              mount-munger              0                   756b234fa6e2a       busybox-mount
	797ed74fc1800       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   69f30c3f27ac7       hello-node-connect-7d85dfc575-pvq4m
	f19913170bea1       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                         10 minutes ago      Running             nginx                     0                   74ea6477a50a8       nginx-svc
	9233722b13058       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           10 minutes ago      Running             echo-server               0                   5174dba697c69       hello-node-75c85bcc94-524nr
	f228fbf887997       df0860106674d                                                                                         10 minutes ago      Running             kube-proxy                3                   d14826ecc1e95       kube-proxy-kp4d8
	66ddd141ec1f6       52546a367cc9e                                                                                         10 minutes ago      Running             coredns                   2                   0daa4d953b658       coredns-66bc5c9577-ndt25
	0c1510903edfc       6e38f40d628db                                                                                         10 minutes ago      Running             storage-provisioner       3                   e14d4154c78df       storage-provisioner
	a34f86dc27328       5f1f5298c888d                                                                                         10 minutes ago      Running             etcd                      2                   3ec9b5756cc18       etcd-functional-113333
	264a78f9985e9       90550c43ad2bc                                                                                         10 minutes ago      Running             kube-apiserver            0                   cac40828278ac       kube-apiserver-functional-113333
	1153b7ac7d169       46169d968e920                                                                                         10 minutes ago      Running             kube-scheduler            3                   4466a2147b50c       kube-scheduler-functional-113333
	f40ad3c8f099f       a0af72f2ec6d6                                                                                         10 minutes ago      Running             kube-controller-manager   2                   42f7aadb66137       kube-controller-manager-functional-113333
	f92f6d64d6929       46169d968e920                                                                                         10 minutes ago      Exited              kube-scheduler            2                   ba17dfc161521       kube-scheduler-functional-113333
	a13393a00a30d       df0860106674d                                                                                         10 minutes ago      Exited              kube-proxy                2                   871e0c1c685a0       kube-proxy-kp4d8
	b3296caa44f98       6e38f40d628db                                                                                         11 minutes ago      Exited              storage-provisioner       2                   3ae050bca60a4       storage-provisioner
	ebb584477fb59       52546a367cc9e                                                                                         11 minutes ago      Exited              coredns                   1                   c858f76b2e6af       coredns-66bc5c9577-ndt25
	fe534996d3885       a0af72f2ec6d6                                                                                         11 minutes ago      Exited              kube-controller-manager   1                   26caa1f2477bb       kube-controller-manager-functional-113333
	d15759c72f024       5f1f5298c888d                                                                                         11 minutes ago      Exited              etcd                      1                   daea5fbf20513       etcd-functional-113333
	
	
	==> coredns [66ddd141ec1f] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47174 - 22489 "HINFO IN 8566101316675011462.5533812213724835804. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016422216s
	
	
	==> coredns [ebb584477fb5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49493 - 53604 "HINFO IN 1223955324215989705.3505866021153624538. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.425693464s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-113333
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-113333
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=functional-113333
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T11_17_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 11:17:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-113333
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 11:30:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 11:27:39 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 11:27:39 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 11:27:39 +0000   Mon, 29 Sep 2025 11:17:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 11:27:39 +0000   Mon, 29 Sep 2025 11:17:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-113333
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 b2c1ed2445d24531beaede9409d240bc
	  System UUID:                0575d937-ba65-482d-bfc6-2fea38fe2d9c
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-524nr                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-pvq4m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-7fc8m                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-ndt25                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-113333                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kube-apiserver-functional-113333              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-113333     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-kp4d8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-113333              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-vxgjm    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xb9xs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1350m (16%)  700m (8%)
	  memory             682Mi (2%)   870Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 11m                kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-113333 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-113333 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-113333 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	  Normal  RegisteredNode           11m                node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-113333 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-113333 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node functional-113333 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           10m                node-controller  Node functional-113333 event: Registered Node functional-113333 in Controller
	
	
	==> dmesg <==
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be 68 62 72 3f fa 08 06
	[  +0.151777] IPv4: martian source 10.244.0.1 from 10.244.0.17, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a d8 70 38 23 e4 08 06
	[Sep29 11:14] IPv4: martian source 10.244.0.1 from 10.244.0.26, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 02 90 0b cb ca ea 08 06
	[  +2.956459] IPv4: martian source 10.244.0.1 from 10.244.0.27, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 3e b8 ba d4 3b c3 08 06
	[  +0.000574] IPv4: martian source 10.244.0.27 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[Sep29 11:15] IPv4: martian source 10.244.0.1 from 10.244.0.34, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 1e 03 82 6d ea 7e 08 06
	[  +0.000575] IPv4: martian source 10.244.0.34 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[  +0.000489] IPv4: martian source 10.244.0.34 from 10.244.0.7, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 5a d2 63 ea f6 fc 08 06
	[ +12.299165] IPv4: martian source 10.244.0.35 from 10.244.0.26, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 02 90 0b cb ca ea 08 06
	[  +0.326039] IPv4: martian source 10.244.0.26 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 12 a3 f8 27 02 13 08 06
	[Sep29 11:17] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 3a bf 42 60 d0 c2 08 06
	[Sep29 11:18] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 74 32 c9 0e 09 08 06
	[Sep29 11:19] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000016] ll header: 00000000: ff ff ff ff ff ff 7e 54 87 73 ab b0 08 06
	
	
	==> etcd [a34f86dc2732] <==
	{"level":"warn","ts":"2025-09-29T11:19:28.842951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.848846Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.854888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.861324Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.867052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.873767Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.881986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.887740Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.893473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.899284Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.905190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37364","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.911741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.918130Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.924691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.931306Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.937510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.943973Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.950640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.962928Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.968730Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:28.974475Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:19:29.025265Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37582","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:29:28.551915Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1312}
	{"level":"info","ts":"2025-09-29T11:29:28.571971Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1312,"took":"19.613145ms","hash":3565127243,"current-db-size-bytes":3801088,"current-db-size":"3.8 MB","current-db-size-in-use-bytes":1896448,"current-db-size-in-use":"1.9 MB"}
	{"level":"info","ts":"2025-09-29T11:29:28.572014Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3565127243,"revision":1312,"compact-revision":-1}
	
	
	==> etcd [d15759c72f02] <==
	{"level":"warn","ts":"2025-09-29T11:18:32.787866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.794501Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.800938Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.811969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.818018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.823898Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33094","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T11:18:32.867090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33122","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T11:19:11.921894Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T11:19:11.921971Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-113333","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T11:19:11.922045Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:19:18.923756Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T11:19:18.923901Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.923935Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T11:19:18.924071Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T11:19:18.924088Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T11:19:18.924504Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:19:18.924570Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:19:18.924583Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T11:19:18.925137Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T11:19:18.925162Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T11:19:18.925173Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.926784Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T11:19:18.926844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T11:19:18.926867Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T11:19:18.926893Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-113333","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:30:08 up  1:12,  0 users,  load average: 0.10, 0.33, 1.09
	Linux functional-113333 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [264a78f9985e] <==
	I0929 11:19:53.703390       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 11:19:53.811677       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.98.106.207"}
	I0929 11:19:54.765604       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.105.116"}
	I0929 11:19:55.660461       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.108.31.211"}
	I0929 11:20:07.254585       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.111.112.162"}
	I0929 11:20:11.760329       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 11:20:11.865830       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.241.54"}
	I0929 11:20:11.875766       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.72.11"}
	I0929 11:20:38.847641       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:20:48.475305       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:22:03.299121       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:22:08.579804       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:23:11.462665       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:23:35.006960       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:24:22.064164       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:24:58.857235       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:25:36.822054       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:26:00.075558       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:27:01.406821       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:27:14.139185       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:28:26.412826       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:28:41.521385       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:29:27.987892       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 11:29:29.412023       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 11:29:48.857821       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [f40ad3c8f099] <==
	I0929 11:19:32.773277       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 11:19:32.775521       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 11:19:32.777750       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:19:32.779925       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 11:19:32.781116       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 11:19:32.783370       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 11:19:32.785612       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:19:32.810084       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:19:32.810106       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 11:19:32.810131       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 11:19:32.810141       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 11:19:32.810171       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 11:19:32.810264       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 11:19:32.810284       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 11:19:32.810289       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 11:19:32.811451       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:19:32.812669       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:19:32.815410       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:19:32.825597       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 11:20:11.807953       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.812019       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.813334       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.816473       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.818178       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 11:20:11.823135       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [fe534996d388] <==
	I0929 11:18:37.926251       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 11:18:37.926267       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 11:18:37.926311       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 11:18:37.926415       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 11:18:37.926505       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 11:18:37.926633       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 11:18:37.926641       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 11:18:37.928534       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 11:18:37.929641       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 11:18:37.931843       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 11:18:37.931894       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 11:18:37.932002       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 11:18:37.932054       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 11:18:37.932061       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 11:18:37.932071       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 11:18:37.934137       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 11:18:37.935302       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 11:18:37.935409       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 11:18:37.935477       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-113333"
	I0929 11:18:37.935514       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 11:18:37.935768       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 11:18:37.937689       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 11:18:37.938908       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 11:18:37.940987       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 11:18:37.958320       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [a13393a00a30] <==
	I0929 11:19:24.202240       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:19:24.271373       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0929 11:19:24.272467       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-113333&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	
	
	==> kube-proxy [f228fbf88799] <==
	I0929 11:19:30.705538       1 server_linux.go:53] "Using iptables proxy"
	I0929 11:19:30.759473       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 11:19:30.859648       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 11:19:30.859681       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 11:19:30.859762       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 11:19:30.883864       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 11:19:30.883939       1 server_linux.go:132] "Using iptables Proxier"
	I0929 11:19:30.889927       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 11:19:30.890375       1 server.go:527] "Version info" version="v1.34.0"
	I0929 11:19:30.890413       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:19:30.892062       1 config.go:106] "Starting endpoint slice config controller"
	I0929 11:19:30.892082       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 11:19:30.892103       1 config.go:200] "Starting service config controller"
	I0929 11:19:30.892111       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 11:19:30.892177       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 11:19:30.892235       1 config.go:309] "Starting node config controller"
	I0929 11:19:30.892257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 11:19:30.892236       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 11:19:30.992294       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 11:19:30.992315       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 11:19:30.993042       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 11:19:30.993055       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1153b7ac7d16] <==
	I0929 11:19:28.187267       1 serving.go:386] Generated self-signed cert in-memory
	W0929 11:19:29.407349       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 11:19:29.407400       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 11:19:29.407413       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 11:19:29.407423       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 11:19:29.422399       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 11:19:29.422419       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 11:19:29.424140       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:19:29.424168       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 11:19:29.425081       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 11:19:29.425179       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 11:19:29.524565       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [f92f6d64d692] <==
	I0929 11:19:24.385645       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Sep 29 11:28:44 functional-113333 kubelet[9100]: E0929 11:28:44.229440    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:28:54 functional-113333 kubelet[9100]: E0929 11:28:54.231239    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:28:54 functional-113333 kubelet[9100]: E0929 11:28:54.231313    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:28:55 functional-113333 kubelet[9100]: E0929 11:28:55.229529    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:28:55 functional-113333 kubelet[9100]: E0929 11:28:55.231444    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:29:06 functional-113333 kubelet[9100]: E0929 11:29:06.231646    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:29:07 functional-113333 kubelet[9100]: E0929 11:29:07.232057    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:29:08 functional-113333 kubelet[9100]: E0929 11:29:08.229703    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:29:08 functional-113333 kubelet[9100]: E0929 11:29:08.231472    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:29:17 functional-113333 kubelet[9100]: E0929 11:29:17.232415    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:29:19 functional-113333 kubelet[9100]: E0929 11:29:19.230395    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:29:20 functional-113333 kubelet[9100]: E0929 11:29:20.232144    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:29:21 functional-113333 kubelet[9100]: E0929 11:29:21.232176    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:29:28 functional-113333 kubelet[9100]: E0929 11:29:28.231323    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:29:31 functional-113333 kubelet[9100]: E0929 11:29:31.229772    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:29:31 functional-113333 kubelet[9100]: E0929 11:29:31.231995    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:29:32 functional-113333 kubelet[9100]: E0929 11:29:32.231314    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:29:43 functional-113333 kubelet[9100]: E0929 11:29:43.231494    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:29:44 functional-113333 kubelet[9100]: E0929 11:29:44.230161    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:29:46 functional-113333 kubelet[9100]: E0929 11:29:46.231901    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	Sep 29 11:29:47 functional-113333 kubelet[9100]: E0929 11:29:47.232095    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:29:57 functional-113333 kubelet[9100]: E0929 11:29:57.232382    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/mysql:5.7\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/mysql-5bb876957f-7fc8m" podUID="15138e7a-750d-441a-9416-b3684980644f"
	Sep 29 11:29:58 functional-113333 kubelet[9100]: E0929 11:29:58.229823    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="686185b3-6518-44ab-a785-e5ad567bf76c"
	Sep 29 11:29:59 functional-113333 kubelet[9100]: E0929 11:29:59.232360    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-vxgjm" podUID="367d1ac4-a750-4f02-9e98-a40f80485812"
	Sep 29 11:30:00 functional-113333 kubelet[9100]: E0929 11:30:00.231333    9100 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xb9xs" podUID="65959828-b43c-46d9-aaf1-caea5d07f5dd"
	
	
	==> storage-provisioner [0c1510903edf] <==
	W0929 11:29:44.354966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:46.357693       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:46.361814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:48.364663       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:48.373287       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:50.376525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:50.383293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:52.387013       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:52.392112       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:54.395496       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:54.399413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:56.403127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:56.408447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:58.411911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:29:58.416489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:00.419801       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:00.423822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:02.427035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:02.430903       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:04.434289       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:04.438299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:06.441519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:06.446770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:08.449792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:30:08.455343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [b3296caa44f9] <==
	I0929 11:18:45.075990       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 11:18:45.082442       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 11:18:45.082490       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 11:18:45.084662       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:48.539506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:52.799812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:56.398213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:18:59.451540       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.473739       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.478257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:19:02.478435       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 11:19:02.478502       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"bc00fa55-b5d7-4096-ad35-b571280c955a", APIVersion:"v1", ResourceVersion:"556", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488 became leader
	I0929 11:19:02.478593       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488!
	W0929 11:19:02.480302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:02.483444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 11:19:02.578842       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-113333_24fadb53-6855-4ec5-aad1-993b9e947488!
	W0929 11:19:04.486480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:04.490606       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:06.494256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:06.498085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:08.501237       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:08.506582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:10.509604       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 11:19:10.513944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-113333 -n functional-113333
helpers_test.go:269: (dbg) Run:  kubectl --context functional-113333 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/MySQL]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs: exit status 1 (78.74602ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:05 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.11
	IPs:
	  IP:  10.244.0.11
	Containers:
	  mount-munger:
	    Container ID:  docker://813edc572aee3fca8ca39332981b0dc962ca018d4ff0c26f83d50d21bf947de7
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 11:20:07 +0000
	      Finished:     Mon, 29 Sep 2025 11:20:07 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-n7jzg (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-n7jzg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-113333
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.434s (1.434s including waiting). Image size: 4403845 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             mysql-5bb876957f-7fc8m
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:07 +0000
	Labels:           app=mysql
	                  pod-template-hash=5bb876957f
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.12
	IPs:
	  IP:           10.244.0.12
	Controlled By:  ReplicaSet/mysql-5bb876957f
	Containers:
	  mysql:
	    Container ID:   
	    Image:          docker.io/mysql:5.7
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pwbxp (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-pwbxp:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/mysql-5bb876957f-7fc8m to functional-113333
	  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "docker.io/mysql:5.7"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "docker.io/mysql:5.7": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Normal   BackOff    4m54s (x21 over 10m)  kubelet            Back-off pulling image "docker.io/mysql:5.7"
	  Warning  Failed     4m54s (x21 over 10m)  kubelet            Error: ImagePullBackOff
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-113333/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 11:20:00 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vqmng (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-vqmng:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  10m                    default-scheduler  Successfully assigned default/sp-pod to functional-113333
	  Warning  Failed     10m                    kubelet            Failed to pull image "docker.io/nginx": toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   Pulling    7m21s (x5 over 10m)    kubelet            Pulling image "docker.io/nginx"
	  Warning  Failed     7m21s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     7m21s (x4 over 9m54s)  kubelet            Failed to pull image "docker.io/nginx": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
	  Normal   BackOff    5m2s (x21 over 10m)    kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     5m2s (x21 over 10m)    kubelet            Error: ImagePullBackOff

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-vxgjm" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-xb9xs" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-113333 describe pod busybox-mount mysql-5bb876957f-7fc8m sp-pod dashboard-metrics-scraper-77bf4d6c4c-vxgjm kubernetes-dashboard-855c9754f9-xb9xs: exit status 1
--- FAIL: TestFunctional/parallel/MySQL (602.37s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (271.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p calico-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (4m31.670471471s)

                                                
                                                
-- stdout --
	* [calico-934155] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-934155" primary control-plane node in "calico-934155" cluster
	* Pulling base image v0.0.48 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 12:00:16.268697  714047 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:00:16.268806  714047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:00:16.268814  714047 out.go:374] Setting ErrFile to fd 2...
	I0929 12:00:16.268818  714047 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:00:16.269061  714047 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:00:16.269572  714047 out.go:368] Setting JSON to false
	I0929 12:00:16.270911  714047 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6160,"bootTime":1759141056,"procs":311,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:00:16.271023  714047 start.go:140] virtualization: kvm guest
	I0929 12:00:16.273071  714047 out.go:179] * [calico-934155] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:00:16.274414  714047 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:00:16.274406  714047 notify.go:220] Checking for updates...
	I0929 12:00:16.276892  714047 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:00:16.278162  714047 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:00:16.279260  714047 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:00:16.280332  714047 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:00:16.281357  714047 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:00:16.283044  714047 config.go:182] Loaded profile config "cert-expiration-788277": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:00:16.283187  714047 config.go:182] Loaded profile config "kindnet-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:00:16.283306  714047 config.go:182] Loaded profile config "kubernetes-upgrade-695405": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:00:16.283425  714047 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:00:16.310032  714047 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:00:16.310162  714047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:00:16.371812  714047 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 12:00:16.360323762 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:00:16.371993  714047 docker.go:318] overlay module found
	I0929 12:00:16.373690  714047 out.go:179] * Using the docker driver based on user configuration
	I0929 12:00:16.375076  714047 start.go:304] selected driver: docker
	I0929 12:00:16.375092  714047 start.go:924] validating driver "docker" against <nil>
	I0929 12:00:16.375104  714047 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:00:16.375649  714047 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:00:16.431957  714047 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-29 12:00:16.422375815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:00:16.432146  714047 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 12:00:16.432378  714047 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:00:16.434501  714047 out.go:179] * Using Docker driver with root privileges
	I0929 12:00:16.435570  714047 cni.go:84] Creating CNI manager for "calico"
	I0929 12:00:16.435591  714047 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0929 12:00:16.435705  714047 start.go:348] cluster config:
	{Name:calico-934155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-934155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInt
erval:1m0s}
	I0929 12:00:16.437182  714047 out.go:179] * Starting "calico-934155" primary control-plane node in "calico-934155" cluster
	I0929 12:00:16.438361  714047 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:00:16.439476  714047 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:00:16.440596  714047 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:00:16.440640  714047 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:00:16.440643  714047 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 12:00:16.440676  714047 cache.go:58] Caching tarball of preloaded images
	I0929 12:00:16.440817  714047 preload.go:172] Found /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 12:00:16.440838  714047 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 12:00:16.440962  714047 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/config.json ...
	I0929 12:00:16.440988  714047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/config.json: {Name:mkbd40efb729fbf6c2c6ca56593d25ce6dd06c76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:16.464142  714047 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:00:16.464165  714047 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:00:16.464187  714047 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:00:16.464221  714047 start.go:360] acquireMachinesLock for calico-934155: {Name:mke2535433bf44d541ca3107fd49e7a99a7101b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:00:16.464349  714047 start.go:364] duration metric: took 94.553µs to acquireMachinesLock for "calico-934155"
	I0929 12:00:16.464383  714047 start.go:93] Provisioning new machine with config: &{Name:calico-934155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-934155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:00:16.464486  714047 start.go:125] createHost starting for "" (driver="docker")
	I0929 12:00:16.467575  714047 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 12:00:16.467797  714047 start.go:159] libmachine.API.Create for "calico-934155" (driver="docker")
	I0929 12:00:16.467826  714047 client.go:168] LocalClient.Create starting
	I0929 12:00:16.467912  714047 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem
	I0929 12:00:16.467971  714047 main.go:141] libmachine: Decoding PEM data...
	I0929 12:00:16.467987  714047 main.go:141] libmachine: Parsing certificate...
	I0929 12:00:16.468053  714047 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem
	I0929 12:00:16.468097  714047 main.go:141] libmachine: Decoding PEM data...
	I0929 12:00:16.468114  714047 main.go:141] libmachine: Parsing certificate...
	I0929 12:00:16.468484  714047 cli_runner.go:164] Run: docker network inspect calico-934155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 12:00:16.486871  714047 cli_runner.go:211] docker network inspect calico-934155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 12:00:16.486983  714047 network_create.go:284] running [docker network inspect calico-934155] to gather additional debugging logs...
	I0929 12:00:16.487011  714047 cli_runner.go:164] Run: docker network inspect calico-934155
	W0929 12:00:16.504573  714047 cli_runner.go:211] docker network inspect calico-934155 returned with exit code 1
	I0929 12:00:16.504609  714047 network_create.go:287] error running [docker network inspect calico-934155]: docker network inspect calico-934155: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-934155 not found
	I0929 12:00:16.504626  714047 network_create.go:289] output of [docker network inspect calico-934155]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-934155 not found
	
	** /stderr **
	I0929 12:00:16.504737  714047 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:00:16.523016  714047 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-194f2c805d9d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:95:7f:1a:a5:02} reservation:<nil>}
	I0929 12:00:16.523839  714047 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-a8b5695ccabb IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:69:aa:e9:91:90} reservation:<nil>}
	I0929 12:00:16.524327  714047 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-7e910248d778 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:26:83:9a:10:63:cb} reservation:<nil>}
	I0929 12:00:16.524861  714047 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3103e9adae0b IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:32:29:41:e6:ab:f1} reservation:<nil>}
	I0929 12:00:16.525828  714047 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001cbc090}
	I0929 12:00:16.525871  714047 network_create.go:124] attempt to create docker network calico-934155 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0929 12:00:16.525932  714047 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-934155 calico-934155
	I0929 12:00:16.586557  714047 network_create.go:108] docker network calico-934155 192.168.85.0/24 created
	I0929 12:00:16.586586  714047 kic.go:121] calculated static IP "192.168.85.2" for the "calico-934155" container
	I0929 12:00:16.586655  714047 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 12:00:16.604973  714047 cli_runner.go:164] Run: docker volume create calico-934155 --label name.minikube.sigs.k8s.io=calico-934155 --label created_by.minikube.sigs.k8s.io=true
	I0929 12:00:16.624675  714047 oci.go:103] Successfully created a docker volume calico-934155
	I0929 12:00:16.624756  714047 cli_runner.go:164] Run: docker run --rm --name calico-934155-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-934155 --entrypoint /usr/bin/test -v calico-934155:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 12:00:17.013815  714047 oci.go:107] Successfully prepared a docker volume calico-934155
	I0929 12:00:17.013888  714047 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:00:17.013916  714047 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 12:00:17.014036  714047 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-934155:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 12:00:20.435210  714047 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-934155:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.421105354s)
	I0929 12:00:20.435257  714047 kic.go:203] duration metric: took 3.421336062s to extract preloaded images to volume ...
	W0929 12:00:20.435351  714047 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0929 12:00:20.435410  714047 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0929 12:00:20.435468  714047 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 12:00:20.501020  714047 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-934155 --name calico-934155 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-934155 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-934155 --network calico-934155 --ip 192.168.85.2 --volume calico-934155:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 12:00:20.775361  714047 cli_runner.go:164] Run: docker container inspect calico-934155 --format={{.State.Running}}
	I0929 12:00:20.796245  714047 cli_runner.go:164] Run: docker container inspect calico-934155 --format={{.State.Status}}
	I0929 12:00:20.815560  714047 cli_runner.go:164] Run: docker exec calico-934155 stat /var/lib/dpkg/alternatives/iptables
	I0929 12:00:20.864625  714047 oci.go:144] the created container "calico-934155" has a running status.
	I0929 12:00:20.864678  714047 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa...
	I0929 12:00:21.233928  714047 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 12:00:21.267016  714047 cli_runner.go:164] Run: docker container inspect calico-934155 --format={{.State.Status}}
	I0929 12:00:21.299420  714047 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 12:00:21.299458  714047 kic_runner.go:114] Args: [docker exec --privileged calico-934155 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 12:00:21.349368  714047 cli_runner.go:164] Run: docker container inspect calico-934155 --format={{.State.Status}}
	I0929 12:00:21.371186  714047 machine.go:93] provisionDockerMachine start ...
	I0929 12:00:21.371291  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
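The inspect template in the line above is how the provisioner discovers which host port Docker mapped to the container's 22/tcp; the SSH client then dials 127.0.0.1 on that port (33453 in this run). A rough standalone equivalent, assuming the docker CLI is on PATH (illustration only, not minikube's implementation):

    package main

    // Illustrative sketch only; not part of the minikube source.
    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Same Go template the log uses: first binding of the container's 22/tcp port.
        out, err := exec.Command("docker", "container", "inspect",
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "calico-934155").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh endpoint: 127.0.0.1:" + strings.TrimSpace(string(out)))
    }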
	I0929 12:00:21.392085  714047 main.go:141] libmachine: Using SSH client type: native
	I0929 12:00:21.392317  714047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I0929 12:00:21.392328  714047 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:00:21.533756  714047 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-934155
	
	I0929 12:00:21.533788  714047 ubuntu.go:182] provisioning hostname "calico-934155"
	I0929 12:00:21.533851  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:21.555304  714047 main.go:141] libmachine: Using SSH client type: native
	I0929 12:00:21.555617  714047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I0929 12:00:21.555640  714047 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-934155 && echo "calico-934155" | sudo tee /etc/hostname
	I0929 12:00:21.714438  714047 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-934155
	
	I0929 12:00:21.714523  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:21.733106  714047 main.go:141] libmachine: Using SSH client type: native
	I0929 12:00:21.733333  714047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I0929 12:00:21.733355  714047 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-934155' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-934155/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-934155' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:00:21.875640  714047 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:00:21.875677  714047 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:00:21.875705  714047 ubuntu.go:190] setting up certificates
	I0929 12:00:21.875726  714047 provision.go:84] configureAuth start
	I0929 12:00:21.875795  714047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-934155
	I0929 12:00:21.896815  714047 provision.go:143] copyHostCerts
	I0929 12:00:21.896895  714047 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:00:21.896907  714047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:00:21.896987  714047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:00:21.897114  714047 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:00:21.897128  714047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:00:21.897172  714047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:00:21.897460  714047 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:00:21.897500  714047 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:00:21.897548  714047 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:00:21.898252  714047 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.calico-934155 san=[127.0.0.1 192.168.85.2 calico-934155 localhost minikube]
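The server certificate generated above carries both IP and DNS subject alternative names (127.0.0.1, 192.168.85.2, calico-934155, localhost, minikube) so the daemon can be reached under any of them over TLS. A compact sketch of issuing such a certificate with Go's crypto/x509 follows; it is self-signed for brevity, whereas the step above signs with the cluster CA key, and none of this is minikube's own code:

    package main

    // Illustrative sketch only; not part of the minikube source.
    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.calico-934155"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs mirroring the log: IPs plus host names.
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.85.2")},
            DNSNames:    []string{"calico-934155", "localhost", "minikube"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }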
	I0929 12:00:22.252837  714047 provision.go:177] copyRemoteCerts
	I0929 12:00:22.253133  714047 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:00:22.253192  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:22.301556  714047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa Username:docker}
	I0929 12:00:22.414640  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:00:22.456284  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 12:00:22.493498  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 12:00:22.538446  714047 provision.go:87] duration metric: took 662.678036ms to configureAuth
	I0929 12:00:22.538535  714047 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:00:22.538783  714047 config.go:182] Loaded profile config "calico-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:00:22.538864  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:22.563428  714047 main.go:141] libmachine: Using SSH client type: native
	I0929 12:00:22.563759  714047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I0929 12:00:22.563779  714047 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:00:22.719640  714047 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:00:22.719675  714047 ubuntu.go:71] root file system type: overlay
	I0929 12:00:22.719821  714047 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:00:22.720081  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:22.743739  714047 main.go:141] libmachine: Using SSH client type: native
	I0929 12:00:22.744188  714047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I0929 12:00:22.744362  714047 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:00:22.910972  714047 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:00:22.911084  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:22.931586  714047 main.go:141] libmachine: Using SSH client type: native
	I0929 12:00:22.931916  714047 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I0929 12:00:22.931946  714047 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:00:24.564050  714047 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:55:49.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 12:00:22.908691372 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 12:00:24.564089  714047 machine.go:96] duration metric: took 3.192871189s to provisionDockerMachine
	I0929 12:00:24.564107  714047 client.go:171] duration metric: took 8.096274759s to LocalClient.Create
	I0929 12:00:24.564141  714047 start.go:167] duration metric: took 8.096344136s to libmachine.API.Create "calico-934155"
	I0929 12:00:24.564156  714047 start.go:293] postStartSetup for "calico-934155" (driver="docker")
	I0929 12:00:24.564197  714047 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:00:24.564280  714047 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:00:24.564336  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:24.590325  714047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa Username:docker}
	I0929 12:00:24.702050  714047 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:00:24.707009  714047 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:00:24.707052  714047 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:00:24.707068  714047 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:00:24.707078  714047 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:00:24.707093  714047 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:00:24.707167  714047 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:00:24.707282  714047 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:00:24.707410  714047 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:00:24.719928  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:00:24.767619  714047 start.go:296] duration metric: took 203.428548ms for postStartSetup
	I0929 12:00:24.768110  714047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-934155
	I0929 12:00:24.790341  714047 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/config.json ...
	I0929 12:00:24.790661  714047 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:00:24.790712  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:24.814572  714047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa Username:docker}
	I0929 12:00:24.915944  714047 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:00:24.920984  714047 start.go:128] duration metric: took 8.456477769s to createHost
	I0929 12:00:24.921010  714047 start.go:83] releasing machines lock for "calico-934155", held for 8.456645607s
	I0929 12:00:24.921080  714047 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-934155
	I0929 12:00:24.943606  714047 ssh_runner.go:195] Run: cat /version.json
	I0929 12:00:24.943667  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:24.943699  714047 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:00:24.943784  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:24.966405  714047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa Username:docker}
	I0929 12:00:24.967316  714047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa Username:docker}
	I0929 12:00:25.150805  714047 ssh_runner.go:195] Run: systemctl --version
	I0929 12:00:25.156199  714047 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:00:25.161366  714047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:00:25.195108  714047 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
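The find/sed one-liner above ensures every loopback CNI config carries a "name" field and a 1.0.0 "cniVersion". The same idea expressed as a small Go program (the file path and 0644 mode are assumptions for illustration, not taken from the log):

    package main

    // Illustrative sketch only; not part of the minikube source.
    import (
        "encoding/json"
        "os"
    )

    func main() {
        path := "/etc/cni/net.d/200-loopback.conf" // hypothetical path, for illustration
        data, err := os.ReadFile(path)
        if err != nil {
            panic(err)
        }
        var conf map[string]any
        if err := json.Unmarshal(data, &conf); err != nil {
            panic(err)
        }
        if _, ok := conf["name"]; !ok {
            conf["name"] = "loopback" // same insertion the sed expression performs
        }
        conf["cniVersion"] = "1.0.0"
        out, err := json.MarshalIndent(conf, "", "  ")
        if err != nil {
            panic(err)
        }
        if err := os.WriteFile(path, append(out, '\n'), 0o644); err != nil {
            panic(err)
        }
    }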
	I0929 12:00:25.195173  714047 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:00:25.225707  714047 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 12:00:25.225740  714047 start.go:495] detecting cgroup driver to use...
	I0929 12:00:25.225773  714047 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:00:25.225966  714047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:00:25.243632  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:00:25.255898  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:00:25.267295  714047 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:00:25.267355  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:00:25.278361  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:00:25.289515  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:00:25.300583  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:00:25.311284  714047 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:00:25.322171  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:00:25.332932  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:00:25.343621  714047 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 12:00:25.354351  714047 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:00:25.364332  714047 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:00:25.373587  714047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:00:25.447395  714047 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:00:25.529630  714047 start.go:495] detecting cgroup driver to use...
	I0929 12:00:25.529687  714047 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:00:25.529755  714047 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:00:25.545518  714047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:00:25.561962  714047 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:00:25.581992  714047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:00:25.597392  714047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:00:25.611892  714047 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:00:25.633789  714047 ssh_runner.go:195] Run: which cri-dockerd
	I0929 12:00:25.638413  714047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:00:25.651801  714047 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:00:25.676306  714047 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:00:25.761309  714047 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:00:25.847297  714047 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:00:25.847422  714047 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 12:00:25.870346  714047 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:00:25.884652  714047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:00:25.973835  714047 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 12:00:26.774513  714047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:00:26.786688  714047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:00:26.798479  714047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:00:26.810609  714047 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:00:26.883937  714047 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:00:26.952709  714047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:00:27.022210  714047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:00:27.042268  714047 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:00:27.055041  714047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:00:27.123689  714047 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:00:27.199271  714047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:00:27.212605  714047 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:00:27.212665  714047 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:00:27.216532  714047 start.go:563] Will wait 60s for crictl version
	I0929 12:00:27.216598  714047 ssh_runner.go:195] Run: which crictl
	I0929 12:00:27.219950  714047 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:00:27.258370  714047 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:00:27.258431  714047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:00:27.286641  714047 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:00:27.316007  714047 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:00:27.316090  714047 cli_runner.go:164] Run: docker network inspect calico-934155 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:00:27.334172  714047 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 12:00:27.338131  714047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:00:27.349897  714047 kubeadm.go:875] updating cluster {Name:calico-934155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-934155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:00:27.350026  714047 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:00:27.350065  714047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:00:27.371644  714047 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 12:00:27.371663  714047 docker.go:621] Images already preloaded, skipping extraction
	I0929 12:00:27.371719  714047 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:00:27.393008  714047 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 12:00:27.393030  714047 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:00:27.393040  714047 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 docker true true} ...
	I0929 12:00:27.393152  714047 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-934155 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-934155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I0929 12:00:27.393208  714047 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:00:27.443440  714047 cni.go:84] Creating CNI manager for "calico"
	I0929 12:00:27.443463  714047 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:00:27.443483  714047 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-934155 NodeName:calico-934155 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:00:27.443658  714047 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "calico-934155"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:00:27.443719  714047 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:00:27.453321  714047 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:00:27.453389  714047 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:00:27.462450  714047 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 12:00:27.480472  714047 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:00:27.515171  714047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2213 bytes)
	I0929 12:00:27.547058  714047 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:00:27.551501  714047 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
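The bash one-liner above updates /etc/hosts idempotently: any stale control-plane.minikube.internal entry is dropped before the current mapping is appended. Roughly the same logic in Go (requires root; an illustrative sketch, not the code minikube runs):

    package main

    // Illustrative sketch only; not part of the minikube source.
    import (
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.85.2\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any previous mapping for the control-plane name.
            if !strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                kept = append(kept, line)
            }
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
            panic(err)
        }
    }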
	I0929 12:00:27.564603  714047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:00:27.666403  714047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:00:27.735643  714047 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155 for IP: 192.168.85.2
	I0929 12:00:27.735664  714047 certs.go:194] generating shared ca certs ...
	I0929 12:00:27.735691  714047 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:27.735854  714047 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:00:27.735935  714047 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:00:27.735953  714047 certs.go:256] generating profile certs ...
	I0929 12:00:27.736025  714047 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/client.key
	I0929 12:00:27.736043  714047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/client.crt with IP's: []
	I0929 12:00:28.115345  714047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/client.crt ...
	I0929 12:00:28.115372  714047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/client.crt: {Name:mkbe6f9193bb776979dcd7930dd60204c5e23217 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:28.115532  714047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/client.key ...
	I0929 12:00:28.115545  714047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/client.key: {Name:mk978e235279b2248d70c6ff9f6d21d3617ae8dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:28.115668  714047 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.key.e56520c9
	I0929 12:00:28.115684  714047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.crt.e56520c9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0929 12:00:28.230362  714047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.crt.e56520c9 ...
	I0929 12:00:28.230401  714047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.crt.e56520c9: {Name:mkbf0934f44387d9a9cda527d730ede62277c817 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:28.230662  714047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.key.e56520c9 ...
	I0929 12:00:28.230689  714047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.key.e56520c9: {Name:mk6397c364dd1b210b773a0c3dcd6878b0770a86 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:28.230833  714047 certs.go:381] copying /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.crt.e56520c9 -> /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.crt
	I0929 12:00:28.230977  714047 certs.go:385] copying /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.key.e56520c9 -> /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.key
	I0929 12:00:28.231081  714047 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/proxy-client.key
	I0929 12:00:28.231109  714047 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/proxy-client.crt with IP's: []
	I0929 12:00:28.872790  714047 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/proxy-client.crt ...
	I0929 12:00:28.872820  714047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/proxy-client.crt: {Name:mk7d24ba66f6a83eb4454d658dadbb55b6d8c200 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:28.873006  714047 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/proxy-client.key ...
	I0929 12:00:28.873024  714047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/proxy-client.key: {Name:mkf9b2af0e44d67582d1d510fa7997147e57c3a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:28.873234  714047 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:00:28.873272  714047 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:00:28.873283  714047 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:00:28.873302  714047 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:00:28.873324  714047 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:00:28.873344  714047 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:00:28.873388  714047 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:00:28.874117  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:00:28.903063  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:00:28.934241  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:00:28.962799  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:00:28.990581  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 12:00:29.016174  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 12:00:29.052516  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:00:29.080582  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/calico-934155/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:00:29.109130  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:00:29.138884  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:00:29.166308  714047 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:00:29.192174  714047 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:00:29.211870  714047 ssh_runner.go:195] Run: openssl version
	I0929 12:00:29.217928  714047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:00:29.229120  714047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:00:29.233244  714047 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:00:29.233308  714047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:00:29.240903  714047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:00:29.251335  714047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:00:29.261548  714047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:00:29.265580  714047 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:00:29.265650  714047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:00:29.272999  714047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:00:29.284630  714047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:00:29.295589  714047 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:00:29.299854  714047 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:00:29.299939  714047 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:00:29.309585  714047 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
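Each certificate copied under /usr/share/ca-certificates is also exposed as /etc/ssl/certs/<subject-hash>.0, the layout OpenSSL uses to look up trust anchors; the hash comes from 'openssl x509 -hash -noout' as run above. A small sketch of that pattern (assumes the openssl CLI and root privileges; illustration only, not minikube's code):

    package main

    // Illustrative sketch only; not part of the minikube source.
    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            panic(err)
        }
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
        _ = os.Remove(link) // ignore the error if the link does not exist yet
        if err := os.Symlink(pemPath, link); err != nil {
            panic(err)
        }
        fmt.Println("linked", link, "->", pemPath)
    }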
	I0929 12:00:29.320823  714047 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:00:29.324802  714047 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 12:00:29.324857  714047 kubeadm.go:392] StartCluster: {Name:calico-934155 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-934155 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:00:29.325002  714047 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:00:29.349247  714047 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:00:29.360818  714047 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 12:00:29.374171  714047 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 12:00:29.374233  714047 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 12:00:29.384349  714047 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 12:00:29.384370  714047 kubeadm.go:157] found existing configuration files:
	
	I0929 12:00:29.384419  714047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 12:00:29.394131  714047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 12:00:29.394186  714047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 12:00:29.403796  714047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 12:00:29.413598  714047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 12:00:29.413671  714047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 12:00:29.422967  714047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 12:00:29.432333  714047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 12:00:29.432405  714047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 12:00:29.441883  714047 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 12:00:29.451008  714047 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 12:00:29.451074  714047 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 12:00:29.460138  714047 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 12:00:29.526302  714047 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1040-gcp\n", err: exit status 1
	I0929 12:00:29.588967  714047 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 12:00:41.025605  714047 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 12:00:41.025687  714047 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 12:00:41.025820  714047 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 12:00:41.025934  714047 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1040-gcp
	I0929 12:00:41.025981  714047 kubeadm.go:310] OS: Linux
	I0929 12:00:41.026082  714047 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 12:00:41.026180  714047 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 12:00:41.026260  714047 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 12:00:41.026363  714047 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 12:00:41.026439  714047 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 12:00:41.026517  714047 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 12:00:41.026569  714047 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 12:00:41.026632  714047 kubeadm.go:310] CGROUPS_IO: enabled
	I0929 12:00:41.026790  714047 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 12:00:41.026964  714047 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 12:00:41.027115  714047 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 12:00:41.027241  714047 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 12:00:41.030175  714047 out.go:252]   - Generating certificates and keys ...
	I0929 12:00:41.030268  714047 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 12:00:41.030358  714047 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 12:00:41.030449  714047 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 12:00:41.030551  714047 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 12:00:41.030639  714047 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 12:00:41.030719  714047 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 12:00:41.030797  714047 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 12:00:41.030989  714047 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-934155 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0929 12:00:41.031072  714047 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 12:00:41.031265  714047 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-934155 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0929 12:00:41.031390  714047 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 12:00:41.031486  714047 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 12:00:41.031549  714047 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 12:00:41.031638  714047 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 12:00:41.031718  714047 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 12:00:41.031798  714047 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 12:00:41.031935  714047 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 12:00:41.032047  714047 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 12:00:41.032142  714047 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 12:00:41.032259  714047 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 12:00:41.032356  714047 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 12:00:41.033494  714047 out.go:252]   - Booting up control plane ...
	I0929 12:00:41.033626  714047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 12:00:41.033744  714047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 12:00:41.033842  714047 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 12:00:41.033984  714047 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 12:00:41.034093  714047 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 12:00:41.034210  714047 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 12:00:41.034323  714047 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 12:00:41.034393  714047 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 12:00:41.034572  714047 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 12:00:41.034739  714047 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 12:00:41.034812  714047 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001814894s
	I0929 12:00:41.034927  714047 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 12:00:41.035021  714047 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0929 12:00:41.035095  714047 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 12:00:41.035158  714047 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 12:00:41.035221  714047 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.609763822s
	I0929 12:00:41.035278  714047 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.982583054s
	I0929 12:00:41.035331  714047 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.001917075s
	I0929 12:00:41.035417  714047 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 12:00:41.035528  714047 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 12:00:41.035580  714047 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 12:00:41.035825  714047 kubeadm.go:310] [mark-control-plane] Marking the node calico-934155 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 12:00:41.035953  714047 kubeadm.go:310] [bootstrap-token] Using token: lv5jvm.a7u41y6mem9dztbv
	I0929 12:00:41.037407  714047 out.go:252]   - Configuring RBAC rules ...
	I0929 12:00:41.037536  714047 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 12:00:41.037659  714047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 12:00:41.037846  714047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 12:00:41.038047  714047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 12:00:41.038196  714047 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 12:00:41.038327  714047 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 12:00:41.038425  714047 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 12:00:41.038467  714047 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 12:00:41.038509  714047 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 12:00:41.038515  714047 kubeadm.go:310] 
	I0929 12:00:41.038593  714047 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 12:00:41.038613  714047 kubeadm.go:310] 
	I0929 12:00:41.038730  714047 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 12:00:41.038743  714047 kubeadm.go:310] 
	I0929 12:00:41.038786  714047 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 12:00:41.038867  714047 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 12:00:41.038988  714047 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 12:00:41.039004  714047 kubeadm.go:310] 
	I0929 12:00:41.039074  714047 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 12:00:41.039088  714047 kubeadm.go:310] 
	I0929 12:00:41.039156  714047 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 12:00:41.039177  714047 kubeadm.go:310] 
	I0929 12:00:41.039258  714047 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 12:00:41.039385  714047 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 12:00:41.039474  714047 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 12:00:41.039483  714047 kubeadm.go:310] 
	I0929 12:00:41.039601  714047 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 12:00:41.039708  714047 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 12:00:41.039718  714047 kubeadm.go:310] 
	I0929 12:00:41.039813  714047 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lv5jvm.a7u41y6mem9dztbv \
	I0929 12:00:41.039983  714047 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56a87e2f5374a5309464fa1eb59b1c3e3c0ac1144c877e4b4247536ac332ae7e \
	I0929 12:00:41.040018  714047 kubeadm.go:310] 	--control-plane 
	I0929 12:00:41.040027  714047 kubeadm.go:310] 
	I0929 12:00:41.040164  714047 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 12:00:41.040185  714047 kubeadm.go:310] 
	I0929 12:00:41.040293  714047 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lv5jvm.a7u41y6mem9dztbv \
	I0929 12:00:41.040465  714047 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:56a87e2f5374a5309464fa1eb59b1c3e3c0ac1144c877e4b4247536ac332ae7e 
	I0929 12:00:41.040483  714047 cni.go:84] Creating CNI manager for "calico"
	I0929 12:00:41.041923  714047 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0929 12:00:41.044544  714047 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 12:00:41.044573  714047 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0929 12:00:41.072792  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 12:00:42.182357  714047 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.10952289s)
	I0929 12:00:42.182412  714047 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 12:00:42.182515  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-934155 minikube.k8s.io/updated_at=2025_09_29T12_00_42_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf minikube.k8s.io/name=calico-934155 minikube.k8s.io/primary=true
	I0929 12:00:42.182725  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:42.193590  714047 ops.go:34] apiserver oom_adj: -16
	I0929 12:00:42.287340  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:42.788116  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:43.288066  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:43.788408  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:44.287466  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:44.788089  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:45.287478  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:45.788108  714047 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 12:00:45.862557  714047 kubeadm.go:1105] duration metric: took 3.679907629s to wait for elevateKubeSystemPrivileges
	I0929 12:00:45.862595  714047 kubeadm.go:394] duration metric: took 16.537741119s to StartCluster
	I0929 12:00:45.862618  714047 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:45.862698  714047 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:00:45.864256  714047 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:00:45.864527  714047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 12:00:45.864526  714047 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:00:45.864554  714047 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:00:45.864753  714047 config.go:182] Loaded profile config "calico-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:00:45.864633  714047 addons.go:69] Setting storage-provisioner=true in profile "calico-934155"
	I0929 12:00:45.864852  714047 addons.go:238] Setting addon storage-provisioner=true in "calico-934155"
	I0929 12:00:45.864639  714047 addons.go:69] Setting default-storageclass=true in profile "calico-934155"
	I0929 12:00:45.864915  714047 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-934155"
	I0929 12:00:45.864929  714047 host.go:66] Checking if "calico-934155" exists ...
	I0929 12:00:45.865323  714047 cli_runner.go:164] Run: docker container inspect calico-934155 --format={{.State.Status}}
	I0929 12:00:45.865588  714047 cli_runner.go:164] Run: docker container inspect calico-934155 --format={{.State.Status}}
	I0929 12:00:45.866746  714047 out.go:179] * Verifying Kubernetes components...
	I0929 12:00:45.868770  714047 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:00:45.900920  714047 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:00:45.901443  714047 addons.go:238] Setting addon default-storageclass=true in "calico-934155"
	I0929 12:00:45.901492  714047 host.go:66] Checking if "calico-934155" exists ...
	I0929 12:00:45.902039  714047 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:00:45.902057  714047 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:00:45.902112  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:45.902626  714047 cli_runner.go:164] Run: docker container inspect calico-934155 --format={{.State.Status}}
	I0929 12:00:45.934358  714047 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:00:45.934461  714047 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:00:45.934552  714047 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-934155
	I0929 12:00:45.939209  714047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa Username:docker}
	I0929 12:00:45.963935  714047 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/calico-934155/id_rsa Username:docker}
	I0929 12:00:45.998545  714047 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 12:00:46.050095  714047 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:00:46.100905  714047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:00:46.124169  714047 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:00:46.379162  714047 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0929 12:00:46.381494  714047 node_ready.go:35] waiting up to 15m0s for node "calico-934155" to be "Ready" ...
	I0929 12:00:46.397114  714047 node_ready.go:49] node "calico-934155" is "Ready"
	I0929 12:00:46.397177  714047 node_ready.go:38] duration metric: took 15.625097ms for node "calico-934155" to be "Ready" ...
	I0929 12:00:46.397221  714047 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:00:46.397318  714047 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:00:46.690403  714047 api_server.go:72] duration metric: took 825.77866ms to wait for apiserver process to appear ...
	I0929 12:00:46.690461  714047 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:00:46.690489  714047 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 12:00:46.697749  714047 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 12:00:46.698974  714047 api_server.go:141] control plane version: v1.34.0
	I0929 12:00:46.699002  714047 api_server.go:131] duration metric: took 8.531645ms to wait for apiserver health ...
	I0929 12:00:46.699013  714047 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:00:46.707071  714047 system_pods.go:59] 10 kube-system pods found
	I0929 12:00:46.707180  714047 system_pods.go:61] "calico-kube-controllers-59556d9b4c-9bp7j" [8224637f-2d90-4d8f-adc7-d565d6cb66ee] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 12:00:46.707236  714047 system_pods.go:61] "calico-node-m92ff" [68d47464-6564-4be4-a043-c077e41da417] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 12:00:46.707285  714047 system_pods.go:61] "coredns-66bc5c9577-mhprz" [4d7ab450-9b27-475a-920c-dd5717ffe88b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:46.707309  714047 system_pods.go:61] "coredns-66bc5c9577-n2j2d" [94b118e5-d409-4889-b8d5-79d0859e472f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:46.707354  714047 system_pods.go:61] "etcd-calico-934155" [b93f272f-5635-47ac-9b0b-8c3d97a473ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:00:46.707380  714047 system_pods.go:61] "kube-apiserver-calico-934155" [cc1485cf-e407-45a2-acd2-98d4c6e77ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:00:46.707390  714047 system_pods.go:61] "kube-controller-manager-calico-934155" [e642f20c-ed93-4da5-94a0-430f0c027bea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:00:46.707402  714047 system_pods.go:61] "kube-proxy-cgfcg" [bb43dcf7-a5f8-41a4-8093-448c0fdf0226] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:00:46.707409  714047 system_pods.go:61] "kube-scheduler-calico-934155" [0bb4ee26-844e-4667-a112-39c33b8ed008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:00:46.707415  714047 system_pods.go:61] "storage-provisioner" [31bb77af-709e-4517-9ce5-e9900a4170ae] Pending
	I0929 12:00:46.707424  714047 system_pods.go:74] duration metric: took 8.401723ms to wait for pod list to return data ...
	I0929 12:00:46.707433  714047 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:00:46.714693  714047 default_sa.go:45] found service account: "default"
	I0929 12:00:46.714728  714047 default_sa.go:55] duration metric: took 7.286262ms for default service account to be created ...
	I0929 12:00:46.714741  714047 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:00:46.714808  714047 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 12:00:46.716465  714047 addons.go:514] duration metric: took 851.910319ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 12:00:46.721501  714047 system_pods.go:86] 10 kube-system pods found
	I0929 12:00:46.721542  714047 system_pods.go:89] "calico-kube-controllers-59556d9b4c-9bp7j" [8224637f-2d90-4d8f-adc7-d565d6cb66ee] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 12:00:46.721556  714047 system_pods.go:89] "calico-node-m92ff" [68d47464-6564-4be4-a043-c077e41da417] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 12:00:46.721573  714047 system_pods.go:89] "coredns-66bc5c9577-mhprz" [4d7ab450-9b27-475a-920c-dd5717ffe88b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:46.721587  714047 system_pods.go:89] "coredns-66bc5c9577-n2j2d" [94b118e5-d409-4889-b8d5-79d0859e472f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:46.721600  714047 system_pods.go:89] "etcd-calico-934155" [b93f272f-5635-47ac-9b0b-8c3d97a473ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:00:46.721614  714047 system_pods.go:89] "kube-apiserver-calico-934155" [cc1485cf-e407-45a2-acd2-98d4c6e77ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:00:46.721629  714047 system_pods.go:89] "kube-controller-manager-calico-934155" [e642f20c-ed93-4da5-94a0-430f0c027bea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:00:46.721645  714047 system_pods.go:89] "kube-proxy-cgfcg" [bb43dcf7-a5f8-41a4-8093-448c0fdf0226] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:00:46.721657  714047 system_pods.go:89] "kube-scheduler-calico-934155" [0bb4ee26-844e-4667-a112-39c33b8ed008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:00:46.721666  714047 system_pods.go:89] "storage-provisioner" [31bb77af-709e-4517-9ce5-e9900a4170ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:00:46.721697  714047 retry.go:31] will retry after 304.137498ms: missing components: kube-dns, kube-proxy
	I0929 12:00:46.884167  714047 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-934155" context rescaled to 1 replicas
	I0929 12:00:47.030673  714047 system_pods.go:86] 10 kube-system pods found
	I0929 12:00:47.030709  714047 system_pods.go:89] "calico-kube-controllers-59556d9b4c-9bp7j" [8224637f-2d90-4d8f-adc7-d565d6cb66ee] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 12:00:47.030762  714047 system_pods.go:89] "calico-node-m92ff" [68d47464-6564-4be4-a043-c077e41da417] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 12:00:47.030775  714047 system_pods.go:89] "coredns-66bc5c9577-mhprz" [4d7ab450-9b27-475a-920c-dd5717ffe88b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:47.030781  714047 system_pods.go:89] "coredns-66bc5c9577-n2j2d" [94b118e5-d409-4889-b8d5-79d0859e472f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:47.030792  714047 system_pods.go:89] "etcd-calico-934155" [b93f272f-5635-47ac-9b0b-8c3d97a473ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:00:47.030801  714047 system_pods.go:89] "kube-apiserver-calico-934155" [cc1485cf-e407-45a2-acd2-98d4c6e77ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:00:47.030809  714047 system_pods.go:89] "kube-controller-manager-calico-934155" [e642f20c-ed93-4da5-94a0-430f0c027bea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:00:47.030813  714047 system_pods.go:89] "kube-proxy-cgfcg" [bb43dcf7-a5f8-41a4-8093-448c0fdf0226] Running
	I0929 12:00:47.030821  714047 system_pods.go:89] "kube-scheduler-calico-934155" [0bb4ee26-844e-4667-a112-39c33b8ed008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:00:47.030825  714047 system_pods.go:89] "storage-provisioner" [31bb77af-709e-4517-9ce5-e9900a4170ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:00:47.030841  714047 retry.go:31] will retry after 343.346054ms: missing components: kube-dns
	I0929 12:00:47.379292  714047 system_pods.go:86] 10 kube-system pods found
	I0929 12:00:47.379334  714047 system_pods.go:89] "calico-kube-controllers-59556d9b4c-9bp7j" [8224637f-2d90-4d8f-adc7-d565d6cb66ee] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 12:00:47.379348  714047 system_pods.go:89] "calico-node-m92ff" [68d47464-6564-4be4-a043-c077e41da417] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 12:00:47.379358  714047 system_pods.go:89] "coredns-66bc5c9577-mhprz" [4d7ab450-9b27-475a-920c-dd5717ffe88b] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:47.379366  714047 system_pods.go:89] "coredns-66bc5c9577-n2j2d" [94b118e5-d409-4889-b8d5-79d0859e472f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:47.379373  714047 system_pods.go:89] "etcd-calico-934155" [b93f272f-5635-47ac-9b0b-8c3d97a473ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:00:47.379382  714047 system_pods.go:89] "kube-apiserver-calico-934155" [cc1485cf-e407-45a2-acd2-98d4c6e77ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:00:47.379407  714047 system_pods.go:89] "kube-controller-manager-calico-934155" [e642f20c-ed93-4da5-94a0-430f0c027bea] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:00:47.379413  714047 system_pods.go:89] "kube-proxy-cgfcg" [bb43dcf7-a5f8-41a4-8093-448c0fdf0226] Running
	I0929 12:00:47.379421  714047 system_pods.go:89] "kube-scheduler-calico-934155" [0bb4ee26-844e-4667-a112-39c33b8ed008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:00:47.379430  714047 system_pods.go:89] "storage-provisioner" [31bb77af-709e-4517-9ce5-e9900a4170ae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:00:47.379447  714047 retry.go:31] will retry after 464.989894ms: missing components: kube-dns
	I0929 12:00:47.849109  714047 system_pods.go:86] 10 kube-system pods found
	I0929 12:00:47.849160  714047 system_pods.go:89] "calico-kube-controllers-59556d9b4c-9bp7j" [8224637f-2d90-4d8f-adc7-d565d6cb66ee] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 12:00:47.849175  714047 system_pods.go:89] "calico-node-m92ff" [68d47464-6564-4be4-a043-c077e41da417] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 12:00:47.849188  714047 system_pods.go:89] "coredns-66bc5c9577-mhprz" [4d7ab450-9b27-475a-920c-dd5717ffe88b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:47.849200  714047 system_pods.go:89] "coredns-66bc5c9577-n2j2d" [94b118e5-d409-4889-b8d5-79d0859e472f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:00:47.849214  714047 system_pods.go:89] "etcd-calico-934155" [b93f272f-5635-47ac-9b0b-8c3d97a473ee] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:00:47.849227  714047 system_pods.go:89] "kube-apiserver-calico-934155" [cc1485cf-e407-45a2-acd2-98d4c6e77ed9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:00:47.849237  714047 system_pods.go:89] "kube-controller-manager-calico-934155" [e642f20c-ed93-4da5-94a0-430f0c027bea] Running
	I0929 12:00:47.849244  714047 system_pods.go:89] "kube-proxy-cgfcg" [bb43dcf7-a5f8-41a4-8093-448c0fdf0226] Running
	I0929 12:00:47.849258  714047 system_pods.go:89] "kube-scheduler-calico-934155" [0bb4ee26-844e-4667-a112-39c33b8ed008] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:00:47.849266  714047 system_pods.go:89] "storage-provisioner" [31bb77af-709e-4517-9ce5-e9900a4170ae] Running
	I0929 12:00:47.849279  714047 system_pods.go:126] duration metric: took 1.134528496s to wait for k8s-apps to be running ...
	I0929 12:00:47.849292  714047 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:00:47.849343  714047 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:00:47.864927  714047 system_svc.go:56] duration metric: took 15.620965ms WaitForService to wait for kubelet
	I0929 12:00:47.864961  714047 kubeadm.go:578] duration metric: took 2.000344649s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:00:47.864982  714047 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:00:47.868600  714047 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:00:47.868630  714047 node_conditions.go:123] node cpu capacity is 8
	I0929 12:00:47.868648  714047 node_conditions.go:105] duration metric: took 3.660702ms to run NodePressure ...
	I0929 12:00:47.868662  714047 start.go:241] waiting for startup goroutines ...
	I0929 12:00:47.868672  714047 start.go:246] waiting for cluster config update ...
	I0929 12:00:47.868686  714047 start.go:255] writing updated cluster config ...
	I0929 12:00:47.869141  714047 ssh_runner.go:195] Run: rm -f paused
	I0929 12:00:47.873475  714047 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:00:47.877813  714047 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mhprz" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:00:49.883403  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:00:51.884154  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:00:53.886430  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:00:56.385912  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:00:58.386125  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:00.389532  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:02.885957  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:05.384473  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:07.884063  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:09.884379  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:11.884636  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:14.383987  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:16.384439  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:18.385256  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:20.884310  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:22.884418  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:25.384491  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:27.391292  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:29.882681  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:31.882776  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:33.883766  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:35.885052  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:38.384842  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:40.389596  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:42.391545  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:44.892450  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:47.383813  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:49.384473  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:51.495898  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:53.883245  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:56.384456  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:01:58.882957  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:00.884361  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:03.383608  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:05.384104  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:07.387913  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:09.888258  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:12.384264  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:14.883376  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:16.884347  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:18.886524  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:21.395566  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:23.885048  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:26.384343  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:28.883392  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:30.884937  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:32.885195  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:35.538633  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:37.887338  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:40.383310  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:42.384259  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:44.883396  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:46.884155  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:49.384946  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:51.883539  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:53.887437  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:56.383311  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:02:58.882974  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:00.888517  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:03.389943  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:05.885576  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:08.383822  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:10.384295  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:12.884357  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:15.385087  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:17.884904  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:20.384114  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:22.384479  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:24.883342  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:26.886895  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:29.384590  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:31.384779  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:33.883328  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:35.883394  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:38.386431  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:40.885902  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:43.384511  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:45.384581  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:47.884551  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:50.385682  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:52.883482  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:54.883675  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:56.884570  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:03:59.383651  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:01.384235  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:03.883094  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:05.883651  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:08.383398  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:10.384214  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:12.384733  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:14.883404  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:16.884201  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:18.884290  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:21.382969  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:23.384434  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:25.385925  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:27.889124  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:30.383623  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:32.882973  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:34.884498  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:37.384835  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:39.884627  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:42.394370  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:44.884436  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	W0929 12:04:46.885340  714047 pod_ready.go:104] pod "coredns-66bc5c9577-mhprz" is not "Ready", error: <nil>
	I0929 12:04:47.874178  714047 pod_ready.go:86] duration metric: took 3m59.996316413s for pod "coredns-66bc5c9577-mhprz" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:04:47.874219  714047 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0929 12:04:47.874240  714047 pod_ready.go:40] duration metric: took 4m0.000725782s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:04:47.876003  714047 out.go:203] 
	W0929 12:04:47.877816  714047 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0929 12:04:47.880201  714047 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (271.72s)
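Editor's note on the failure above: kubeadm init itself succeeded; the harness then allowed up to 4m0s of extra waiting for every kube-system pod carrying one of the watched labels to report "Ready", and pod "coredns-66bc5c9577-mhprz" never did, so minikube exited with GUEST_START (exit status 80). CoreDNS typically stays NotReady until the CNI (here Calico) is fully installed. A minimal manual triage sketch, assuming the "calico-934155" profile from the log is still running and its kubeconfig context exists; these are standard kubectl commands, not part of the test harness, and the label selectors are the ones shown in the log and the upstream Calico manifest:

    kubectl --context calico-934155 -n kube-system get pods -o wide
    kubectl --context calico-934155 -n kube-system describe pod -l k8s-app=calico-node
    kubectl --context calico-934155 -n kube-system logs -l k8s-app=kube-dns --tail=50
    # Re-run the readiness wait the test effectively performs (4 minute budget)
    kubectl --context calico-934155 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m

If calico-node is stuck in its init containers (upgrade-ipam / install-cni / mount-bpffs, as listed in the pod status above), its events and logs usually show whether image pulls or the CNI install step are the bottleneck.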

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-schbp" [71e083e1-076b-456d-a95a-397cfbfe8d83] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858855 -n old-k8s-version-858855
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 12:15:01.477578192 +0000 UTC m=+3783.521390286
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-858855 describe po kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-858855 describe po kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-schbp
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-858855/192.168.103.2
Start Time:       Mon, 29 Sep 2025 12:05:38 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kl8b9 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-kl8b9:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m23s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp to old-k8s-version-858855
Normal   Pulling    7m51s (x4 over 9m22s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     7m51s (x4 over 9m22s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m51s (x4 over 9m22s)   kubelet            Error: ErrImagePull
Warning  Failed     7m35s (x6 over 9m22s)   kubelet            Error: ImagePullBackOff
Normal   BackOff    4m20s (x20 over 9m22s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-858855 logs kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-858855 logs kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard: exit status 1 (76.495339ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-schbp" is waiting to start: trying and failing to pull image

** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-858855 logs kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
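The failing assertion at start_stop_delete_test.go:273 is a nine-minute wait on the k8s-app=kubernetes-dashboard label selector. An equivalent manual check outside the harness, sketched here reusing the context and namespace from this run, is a kubectl wait on the same label, followed by a describe to surface the pull errors shown above when it times out:

kubectl --context old-k8s-version-858855 -n kubernetes-dashboard \
  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m
kubectl --context old-k8s-version-858855 -n kubernetes-dashboard \
  describe pod -l k8s-app=kubernetes-dashboard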
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-858855
helpers_test.go:243: (dbg) docker inspect old-k8s-version-858855:

-- stdout --
	[
	    {
	        "Id": "d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c",
	        "Created": "2025-09-29T12:04:12.432746747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 848504,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:05:11.600077832Z",
	            "FinishedAt": "2025-09-29T12:05:08.494386589Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c/hosts",
	        "LogPath": "/var/lib/docker/containers/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c-json.log",
	        "Name": "/old-k8s-version-858855",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-858855:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-858855",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c",
	                "LowerDir": "/var/lib/docker/overlay2/0a60eea2246e69e0d62749692c852ae3f73ff2acf16c594adc8f9f5ab1393474-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a60eea2246e69e0d62749692c852ae3f73ff2acf16c594adc8f9f5ab1393474/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a60eea2246e69e0d62749692c852ae3f73ff2acf16c594adc8f9f5ab1393474/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a60eea2246e69e0d62749692c852ae3f73ff2acf16c594adc8f9f5ab1393474/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-858855",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-858855/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-858855",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-858855",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-858855",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "958645a5cf70775cbc4b388fdca21a8651ae97a68e0715bac2cb7fe22819a059",
	            "SandboxKey": "/var/run/docker/netns/958645a5cf70",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-858855": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:5d:d7:35:91:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0c7082fdeaedacd6a814f0adb6da2805a722459cf4db770dd9f882e32c523fb",
	                    "EndpointID": "d518ee8a6c050656fbaaa4d067f30895a0728c93aef673bb6f46794dbaae4e7f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-858855",
	                        "d6b6af9eccb6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
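The inspect dump above is the whole container object; when only a few fields matter for triage (container state, restart count, node IP, the published API server port), docker inspect's Go-template formatting can pull them directly. A sketch against the same container name:

docker inspect old-k8s-version-858855 \
  --format 'status={{.State.Status}} restarts={{.RestartCount}} ip={{(index .NetworkSettings.Networks "old-k8s-version-858855").IPAddress}}'
docker inspect old-k8s-version-858855 \
  --format '{{json (index .NetworkSettings.Ports "8443/tcp")}}'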
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-858855 -n old-k8s-version-858855
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-858855 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-858855 logs -n 25: (1.085160985s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p calico-934155 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo cat /etc/containerd/config.toml                                                                                                                                                                                           │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo containerd config dump                                                                                                                                                                                                    │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p disable-driver-mounts-929504                                                                                                                                                                                                                 │ disable-driver-mounts-929504 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │                     │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl cat crio --no-pager                                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                   │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo crio config                                                                                                                                                                                                               │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p calico-934155                                                                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-858855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ stop    │ -p default-k8s-diff-port-414542 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-031687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p embed-certs-031687 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-031687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-306088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p no-preload-306088 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p no-preload-306088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:06:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:06:36.516482  871091 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:06:36.516771  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.516782  871091 out.go:374] Setting ErrFile to fd 2...
	I0929 12:06:36.516786  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.517034  871091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:06:36.517566  871091 out.go:368] Setting JSON to false
	I0929 12:06:36.519099  871091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6540,"bootTime":1759141056,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:06:36.519186  871091 start.go:140] virtualization: kvm guest
	I0929 12:06:36.521306  871091 out.go:179] * [no-preload-306088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:06:36.522994  871091 notify.go:220] Checking for updates...
	I0929 12:06:36.523025  871091 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:06:36.524361  871091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:06:36.526212  871091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:36.527856  871091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:06:36.529330  871091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:06:36.530640  871091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:06:36.532489  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:36.532971  871091 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:06:36.557847  871091 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:06:36.557955  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.619389  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.606711858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.619500  871091 docker.go:318] overlay module found
	I0929 12:06:36.621623  871091 out.go:179] * Using the docker driver based on existing profile
	I0929 12:06:36.622958  871091 start.go:304] selected driver: docker
	I0929 12:06:36.622977  871091 start.go:924] validating driver "docker" against &{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.623069  871091 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:06:36.623939  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.681042  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.670856635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.681348  871091 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:36.681383  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:36.681440  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:36.681496  871091 start.go:348] cluster config:
	{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.683409  871091 out.go:179] * Starting "no-preload-306088" primary control-plane node in "no-preload-306088" cluster
	I0929 12:06:36.684655  871091 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:06:36.685791  871091 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:06:36.686923  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:36.687033  871091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:06:36.687071  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:36.687230  871091 cache.go:107] acquiring lock: {Name:mk458b8403b4159d98f7ca606060a1e77262160a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687232  871091 cache.go:107] acquiring lock: {Name:mkf63d99dbdfbf068ef033ecf191a655730e20a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687337  871091 cache.go:107] acquiring lock: {Name:mkd9e4857d62d04bc7d49138f7e4fb0f42e97bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687338  871091 cache.go:107] acquiring lock: {Name:mk4450faafd650ccd11a718cb9b7190d17ab5337 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687401  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 12:06:36.687412  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 12:06:36.687392  871091 cache.go:107] acquiring lock: {Name:mkbcd57035e12e42444c6b36c8f1b923cbfef46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687414  871091 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 202.746µs
	I0929 12:06:36.687421  871091 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 90.507µs
	I0929 12:06:36.687399  871091 cache.go:107] acquiring lock: {Name:mkde0ed0d421c77cb34c222a8ab10a2c13e3e1ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687387  871091 cache.go:107] acquiring lock: {Name:mk11769872d039acf11fe2041fd2e18abd2ae3a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687446  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0929 12:06:36.687455  871091 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 64.616µs
	I0929 12:06:36.687464  871091 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 12:06:36.687467  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 12:06:36.687476  871091 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 144.146µs
	I0929 12:06:36.687484  871091 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 12:06:36.687374  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 12:06:36.687507  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 12:06:36.687466  871091 cache.go:107] acquiring lock: {Name:mk481f9282d27c94586ac987d8a6cd5ea0f1d68c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687587  871091 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.629µs
	I0929 12:06:36.687586  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 12:06:36.687603  871091 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 12:06:36.687581  871091 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 346.559µs
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 12:06:36.687607  871091 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 276.399µs
	I0929 12:06:36.687618  871091 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 12:06:36.687620  871091 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 12:06:36.687628  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 12:06:36.687644  871091 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 230.083µs
	I0929 12:06:36.687655  871091 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 12:06:36.687663  871091 cache.go:87] Successfully saved all images to host disk.
	I0929 12:06:36.709009  871091 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:06:36.709031  871091 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:06:36.709049  871091 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:06:36.709083  871091 start.go:360] acquireMachinesLock for no-preload-306088: {Name:mk0ed8d49a268e0ff510517b50934257047b58c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.709145  871091 start.go:364] duration metric: took 44.22µs to acquireMachinesLock for "no-preload-306088"
	I0929 12:06:36.709171  871091 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:06:36.709180  871091 fix.go:54] fixHost starting: 
	I0929 12:06:36.709410  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:36.728528  871091 fix.go:112] recreateIfNeeded on no-preload-306088: state=Stopped err=<nil>
	W0929 12:06:36.728557  871091 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 12:06:33.757650  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:35.757705  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:34.860020  866509 addons.go:514] duration metric: took 2.511095137s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:34.860298  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:34.860316  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.355994  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.362405  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:35.362444  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.855983  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.860174  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 12:06:35.861328  866509 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:35.861365  866509 api_server.go:131] duration metric: took 1.00564321s to wait for apiserver health ...
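The 500 responses above are the apiserver reporting post-start hooks that have not finished yet (rbac/bootstrap-roles and, earlier, the priority-class bootstrap), not a crash; once those hooks complete the endpoint flips to 200 and the wait returns. The same verbose health report can be fetched manually; a sketch, assuming the profile's kubeconfig context carries the profile name:

kubectl --context embed-certs-031687 get --raw='/healthz?verbose'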
	I0929 12:06:35.861375  866509 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:35.865988  866509 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:35.866018  866509 system_pods.go:61] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.866030  866509 system_pods.go:61] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.866041  866509 system_pods.go:61] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.866050  866509 system_pods.go:61] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.866055  866509 system_pods.go:61] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.866062  866509 system_pods.go:61] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.866068  866509 system_pods.go:61] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.866076  866509 system_pods.go:61] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.866083  866509 system_pods.go:74] duration metric: took 4.69699ms to wait for pod list to return data ...
	I0929 12:06:35.866093  866509 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:35.868695  866509 default_sa.go:45] found service account: "default"
	I0929 12:06:35.868715  866509 default_sa.go:55] duration metric: took 2.61564ms for default service account to be created ...
	I0929 12:06:35.868726  866509 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:35.872060  866509 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:35.872097  866509 system_pods.go:89] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.872135  866509 system_pods.go:89] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.872153  866509 system_pods.go:89] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.872164  866509 system_pods.go:89] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.872173  866509 system_pods.go:89] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.872187  866509 system_pods.go:89] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.872200  866509 system_pods.go:89] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.872215  866509 system_pods.go:89] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.872229  866509 system_pods.go:126] duration metric: took 3.496882ms to wait for k8s-apps to be running ...
	I0929 12:06:35.872241  866509 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:35.872298  866509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:35.886596  866509 system_svc.go:56] duration metric: took 14.342667ms WaitForService to wait for kubelet
	I0929 12:06:35.886631  866509 kubeadm.go:578] duration metric: took 3.537789699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:35.886658  866509 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:35.889756  866509 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:35.889792  866509 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:35.889815  866509 node_conditions.go:105] duration metric: took 3.143621ms to run NodePressure ...
	I0929 12:06:35.889827  866509 start.go:241] waiting for startup goroutines ...
	I0929 12:06:35.889846  866509 start.go:246] waiting for cluster config update ...
	I0929 12:06:35.889860  866509 start.go:255] writing updated cluster config ...
	I0929 12:06:35.890142  866509 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:35.893992  866509 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:35.898350  866509 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:37.904542  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
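The pod_ready loop above keeps re-checking the selected kube-system pods until each reports Ready or the 4m0s budget runs out. A minimal, purely illustrative sketch of the same kind of check done by shelling out to kubectl (the pod name and namespace are simply the ones appearing in this log; a working kubectl and kubeconfig are assumed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	const pod = "coredns-66bc5c9577-h49hh" // name taken from the log above
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		// Read the Ready condition straight from the pod status.
		out, err := exec.Command("kubectl", "get", "pod", pod, "-n", "kube-system",
			"-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
		if err == nil && strings.TrimSpace(string(out)) == "True" {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}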
	I0929 12:06:36.730585  871091 out.go:252] * Restarting existing docker container for "no-preload-306088" ...
	I0929 12:06:36.730671  871091 cli_runner.go:164] Run: docker start no-preload-306088
	I0929 12:06:36.986434  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:37.007128  871091 kic.go:430] container "no-preload-306088" state is running.
	I0929 12:06:37.007513  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:37.028527  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:37.028818  871091 machine.go:93] provisionDockerMachine start ...
	I0929 12:06:37.028949  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:37.047803  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:37.048197  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:37.048230  871091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:06:37.048917  871091 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35296->127.0.0.1:33523: read: connection reset by peer
	I0929 12:06:40.187221  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.187251  871091 ubuntu.go:182] provisioning hostname "no-preload-306088"
	I0929 12:06:40.187303  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.206043  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.206254  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.206273  871091 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-306088 && echo "no-preload-306088" | sudo tee /etc/hostname
	I0929 12:06:40.358816  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.358923  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.377596  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.377950  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.377981  871091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-306088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-306088/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-306088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:06:40.514897  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
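The shell snippet above only touches /etc/hosts when the 127.0.1.1 entry is missing or points at the wrong hostname. A rough Go equivalent of that "replace if stale, append if absent" idea, for illustration only (hostname and path copied from this run; writing /etc/hosts naturally needs root):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const host = "no-preload-306088"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	lines := strings.Split(string(data), "\n")
	found := false
	for i, l := range lines {
		if strings.HasPrefix(l, "127.0.1.1") {
			lines[i] = "127.0.1.1 " + host // rewrite a stale 127.0.1.1 entry
			found = true
		}
	}
	if !found {
		lines = append(lines, "127.0.1.1 "+host) // or append a fresh one
	}
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(lines, "\n")), 0644); err != nil {
		panic(err)
	}
	fmt.Println("hosts entry ensured for", host)
}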
	I0929 12:06:40.514933  871091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:06:40.514962  871091 ubuntu.go:190] setting up certificates
	I0929 12:06:40.514972  871091 provision.go:84] configureAuth start
	I0929 12:06:40.515033  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:40.534028  871091 provision.go:143] copyHostCerts
	I0929 12:06:40.534112  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:06:40.534132  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:06:40.534221  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:06:40.534378  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:06:40.534391  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:06:40.534433  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:06:40.534548  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:06:40.534559  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:06:40.534599  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:06:40.534700  871091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.no-preload-306088 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-306088]
	I0929 12:06:40.796042  871091 provision.go:177] copyRemoteCerts
	I0929 12:06:40.796100  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:06:40.796141  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.814638  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:40.913779  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:06:40.940147  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:06:40.966181  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:06:40.992149  871091 provision.go:87] duration metric: took 477.163201ms to configureAuth
	I0929 12:06:40.992177  871091 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:06:40.992354  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:40.992402  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.010729  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.011015  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.011031  871091 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:06:41.149250  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:06:41.149283  871091 ubuntu.go:71] root file system type: overlay
	I0929 12:06:41.149434  871091 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:06:41.149508  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.169382  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.169625  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.169731  871091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:06:41.327834  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:06:41.327968  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.349146  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.349454  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.349487  871091 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:06:41.500464  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
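The one-liner above swaps in docker.service.new and bounces the daemon only when the generated unit actually differs from the installed one, which is what keeps the restart idempotent. A hedged Go sketch of that compare-then-swap pattern (paths as in the log; the systemctl calls are run via os/exec and require root):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	cur, _ := os.ReadFile("/lib/systemd/system/docker.service")
	next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(cur, next) {
		fmt.Println("unit unchanged, nothing to do")
		return
	}
	// Swap the unit in and restart docker, mirroring the mv / daemon-reload / restart above.
	for _, c := range [][]string{
		{"mv", "/lib/systemd/system/docker.service.new", "/lib/systemd/system/docker.service"},
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := run(c[0], c[1:]...); err != nil {
			panic(err)
		}
	}
}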
	I0929 12:06:41.500497  871091 machine.go:96] duration metric: took 4.471659866s to provisionDockerMachine
	I0929 12:06:41.500512  871091 start.go:293] postStartSetup for "no-preload-306088" (driver="docker")
	I0929 12:06:41.500527  871091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:06:41.500590  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:06:41.500647  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	W0929 12:06:38.257066  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.257540  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.404187  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:42.404863  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:41.520904  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.620006  871091 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:06:41.623863  871091 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:06:41.623914  871091 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:06:41.623925  871091 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:06:41.623935  871091 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:06:41.623959  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:06:41.624015  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:06:41.624111  871091 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:06:41.624227  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:06:41.634489  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:41.661187  871091 start.go:296] duration metric: took 160.643724ms for postStartSetup
	I0929 12:06:41.661275  871091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:06:41.661317  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.679286  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.773350  871091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:06:41.778053  871091 fix.go:56] duration metric: took 5.068864392s for fixHost
	I0929 12:06:41.778084  871091 start.go:83] releasing machines lock for "no-preload-306088", held for 5.068924928s
	I0929 12:06:41.778174  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:41.796247  871091 ssh_runner.go:195] Run: cat /version.json
	I0929 12:06:41.796329  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.796378  871091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:06:41.796452  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.815939  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.816193  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.990299  871091 ssh_runner.go:195] Run: systemctl --version
	I0929 12:06:41.995288  871091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:06:42.000081  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:06:42.020438  871091 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
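The find/sed command above injects a "name" field into the loopback CNI config and pins cniVersion to 1.0.0. A purely illustrative Go version of the same patch using encoding/json instead of sed (the file name below is hypothetical; the real targets are whatever matches /etc/cni/net.d/*loopback.conf*):

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	path := "/etc/cni/net.d/200-loopback.conf" // hypothetical example path
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	var conf map[string]interface{}
	if err := json.Unmarshal(data, &conf); err != nil {
		panic(err)
	}
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback" // add the missing name, as the sed above does
	}
	conf["cniVersion"] = "1.0.0" // pin the CNI version
	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(path, out, 0644); err != nil {
		panic(err)
	}
	fmt.Println("patched", path)
}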
	I0929 12:06:42.020518  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:06:42.029627  871091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:06:42.029658  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.029697  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.029845  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.046748  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:06:42.057142  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:06:42.067569  871091 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:06:42.067621  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:06:42.078146  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.089207  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:06:42.099515  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.109953  871091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:06:42.119715  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:06:42.130148  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:06:42.140184  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 12:06:42.151082  871091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:06:42.161435  871091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:06:42.171100  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.243863  871091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:06:42.322789  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.322843  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.322910  871091 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:06:42.336670  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.348890  871091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:06:42.364257  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
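The three Run lines above are the usual "stop it only if it is actually running" dance: check is-active, stop, check again. A small illustrative helper for that check in Go (unit name taken from the log; systemctl reports an active unit by exiting 0):

package main

import (
	"fmt"
	"os/exec"
)

// isActive mirrors `systemctl is-active --quiet <unit>`: exit code 0 means active.
func isActive(unit string) bool {
	return exec.Command("systemctl", "is-active", "--quiet", unit).Run() == nil
}

func main() {
	unit := "containerd"
	if isActive(unit) {
		// Only stop the unit when it is actually running, as the sequence above does.
		if err := exec.Command("sudo", "systemctl", "stop", "-f", unit).Run(); err != nil {
			fmt.Println("stop failed:", err)
			return
		}
	}
	fmt.Println(unit, "active now?", isActive(unit))
}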
	I0929 12:06:42.376038  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:06:42.387832  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.405901  871091 ssh_runner.go:195] Run: which cri-dockerd
	I0929 12:06:42.409515  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:06:42.419370  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:06:42.438082  871091 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:06:42.511679  871091 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:06:42.584368  871091 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:06:42.584521  871091 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 12:06:42.604074  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:06:42.615691  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.684549  871091 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 12:06:43.531184  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:06:43.543167  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:06:43.555540  871091 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 12:06:43.568219  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.580095  871091 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:06:43.648390  871091 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:06:43.718653  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.787645  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:06:43.810310  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:06:43.822583  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.892062  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:06:43.972699  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.985893  871091 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:06:43.985990  871091 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:06:43.990107  871091 start.go:563] Will wait 60s for crictl version
	I0929 12:06:43.990186  871091 ssh_runner.go:195] Run: which crictl
	I0929 12:06:43.993712  871091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:06:44.032208  871091 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:06:44.032285  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.059274  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.086497  871091 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:06:44.086597  871091 cli_runner.go:164] Run: docker network inspect no-preload-306088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:06:44.103997  871091 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 12:06:44.108202  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.121433  871091 kubeadm.go:875] updating cluster {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:06:44.121548  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:44.121582  871091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:06:44.142018  871091 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 12:06:44.142049  871091 cache_images.go:85] Images are preloaded, skipping loading
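Here the runner lists the images already present in the Docker daemon and concludes the preload can be skipped. A rough sketch of that comparison (the required-image list is a subset copied from the stdout block above; docker is assumed to be on PATH):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.0",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/coredns/coredns:v1.12.1",
	}
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range strings.Fields(string(out)) {
		have[img] = true
	}
	missing := 0
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("images are preloaded, skipping loading")
	}
}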
	I0929 12:06:44.142057  871091 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0929 12:06:44.142162  871091 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-306088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:06:44.142214  871091 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:06:44.196459  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:44.196503  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:44.196520  871091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:06:44.196548  871091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-306088 NodeName:no-preload-306088 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:06:44.196683  871091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-306088"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:06:44.196744  871091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:06:44.206772  871091 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:06:44.206838  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:06:44.216022  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0929 12:06:44.234761  871091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:06:44.253842  871091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
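The kubeadm config copied to /var/tmp/minikube/kubeadm.yaml.new above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A tiny sketch, with no YAML library at all, that just splits the documents and reports each kind (path taken from the log; this is a sanity check, not how minikube itself consumes the file):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		panic(err)
	}
	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "unknown"
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(line, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(line, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}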
	I0929 12:06:44.274561  871091 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:06:44.278469  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.290734  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.362332  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:44.386713  871091 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088 for IP: 192.168.94.2
	I0929 12:06:44.386744  871091 certs.go:194] generating shared ca certs ...
	I0929 12:06:44.386768  871091 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.386954  871091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:06:44.387011  871091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:06:44.387021  871091 certs.go:256] generating profile certs ...
	I0929 12:06:44.387100  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/client.key
	I0929 12:06:44.387155  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key.eb5a4896
	I0929 12:06:44.387190  871091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key
	I0929 12:06:44.387288  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:06:44.387320  871091 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:06:44.387329  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:06:44.387351  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:06:44.387373  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:06:44.387393  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:06:44.387440  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:44.388149  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:06:44.419158  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:06:44.448205  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:06:44.482979  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:06:44.517557  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:06:44.549867  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 12:06:44.576134  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:06:44.604658  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 12:06:44.631756  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:06:44.658081  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:06:44.684187  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:06:44.710650  871091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:06:44.729717  871091 ssh_runner.go:195] Run: openssl version
	I0929 12:06:44.735824  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:06:44.745812  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749234  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749293  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.756789  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:06:44.767948  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:06:44.778834  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782611  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782681  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.790603  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:06:44.800010  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:06:44.810306  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814380  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814509  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.822959  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
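Each openssl/ln pair above computes a certificate's subject hash and links it into /etc/ssl/certs as <hash>.0 so OpenSSL-based clients can find the CA. A hedged sketch of the same two steps from Go (paths as in the log; openssl is shelled out to because its subject-hash computation is easiest to reuse that way, and the symlink of course needs write access to /etc/ssl/certs):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, as `ln -fs` would
	if err := os.Symlink(pemPath, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", pemPath, "->", link)
}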
	I0929 12:06:44.834110  871091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:06:44.837912  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:06:44.844692  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:06:44.851275  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:06:44.858576  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:06:44.866396  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:06:44.875491  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
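Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now. The same check can be expressed in pure Go standard library; this sketch uses one of the cert paths named in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// -checkend 86400: fail if the cert expires within the next 86400 seconds.
	if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
	} else {
		fmt.Println("certificate is valid past the next 24h:", cert.NotAfter)
	}
}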
	I0929 12:06:44.883074  871091 kubeadm.go:392] StartCluster: {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:44.883211  871091 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:06:44.904790  871091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:06:44.917300  871091 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:06:44.917322  871091 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:06:44.917374  871091 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:06:44.927571  871091 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:06:44.928675  871091 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-306088" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.929373  871091 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-357219/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-306088" cluster setting kubeconfig missing "no-preload-306088" context setting]
	I0929 12:06:44.930612  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.932840  871091 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:06:44.943928  871091 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0929 12:06:44.943969  871091 kubeadm.go:593] duration metric: took 26.639509ms to restartPrimaryControlPlane
	I0929 12:06:44.943982  871091 kubeadm.go:394] duration metric: took 60.918658ms to StartCluster
	I0929 12:06:44.944003  871091 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.944082  871091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.946478  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.946713  871091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:06:44.946792  871091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:06:44.946942  871091 addons.go:69] Setting storage-provisioner=true in profile "no-preload-306088"
	I0929 12:06:44.946950  871091 addons.go:69] Setting default-storageclass=true in profile "no-preload-306088"
	I0929 12:06:44.946967  871091 addons.go:238] Setting addon storage-provisioner=true in "no-preload-306088"
	I0929 12:06:44.946975  871091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-306088"
	I0929 12:06:44.946990  871091 addons.go:69] Setting metrics-server=true in profile "no-preload-306088"
	I0929 12:06:44.947004  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:44.947018  871091 addons.go:238] Setting addon metrics-server=true in "no-preload-306088"
	I0929 12:06:44.947007  871091 addons.go:69] Setting dashboard=true in profile "no-preload-306088"
	W0929 12:06:44.947027  871091 addons.go:247] addon metrics-server should already be in state true
	I0929 12:06:44.947041  871091 addons.go:238] Setting addon dashboard=true in "no-preload-306088"
	W0929 12:06:44.946976  871091 addons.go:247] addon storage-provisioner should already be in state true
	W0929 12:06:44.947052  871091 addons.go:247] addon dashboard should already be in state true
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947081  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947415  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947557  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947574  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947710  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.949123  871091 out.go:179] * Verifying Kubernetes components...
	I0929 12:06:44.951560  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.983162  871091 addons.go:238] Setting addon default-storageclass=true in "no-preload-306088"
	W0929 12:06:44.983184  871091 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:06:44.983259  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.983409  871091 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:06:44.983471  871091 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 12:06:44.984010  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.984739  871091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:44.984759  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:06:44.984810  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.985006  871091 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 12:06:44.985094  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:06:44.985115  871091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:06:44.985173  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.989553  871091 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 12:06:44.990700  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 12:06:44.990720  871091 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 12:06:44.990787  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.013082  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.023016  871091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.023045  871091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:06:45.023112  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.023478  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.027093  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.046756  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.088649  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:45.131986  871091 node_ready.go:35] waiting up to 6m0s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:45.142439  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.156825  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:06:45.156854  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 12:06:45.157091  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 12:06:45.157113  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 12:06:45.171641  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.191370  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:06:45.191407  871091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:06:45.191600  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 12:06:45.191622  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 12:06:45.225277  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 12:06:45.225316  871091 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 12:06:45.227138  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.227166  871091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0929 12:06:45.240720  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.240807  871091 retry.go:31] will retry after 255.439226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.253570  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.253730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 12:06:45.253752  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 12:06:45.256592  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.256642  871091 retry.go:31] will retry after 176.530584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.284730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 12:06:45.284766  871091 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 12:06:45.315598  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 12:06:45.315629  871091 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 12:06:45.337290  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.337352  871091 retry.go:31] will retry after 216.448516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.341267  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 12:06:45.341293  871091 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 12:06:45.367418  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 12:06:45.367447  871091 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 12:06:45.394525  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.394579  871091 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 12:06:45.428230  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.433674  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.496374  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.554373  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 12:06:42.757687  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:45.257903  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:47.043268  871091 node_ready.go:49] node "no-preload-306088" is "Ready"
	I0929 12:06:47.043313  871091 node_ready.go:38] duration metric: took 1.911288329s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:47.043336  871091 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:06:47.043393  871091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:06:47.559973  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.131688912s)
	I0929 12:06:47.560210  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.126485829s)
	I0929 12:06:47.561634  871091 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-306088 addons enable metrics-server
	
	I0929 12:06:47.677198  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.180776144s)
	I0929 12:06:47.677264  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.122845465s)
	I0929 12:06:47.677276  871091 api_server.go:72] duration metric: took 2.730527098s to wait for apiserver process to appear ...
	I0929 12:06:47.677284  871091 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:06:47.677301  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:47.677300  871091 addons.go:479] Verifying addon metrics-server=true in "no-preload-306088"
	I0929 12:06:47.679081  871091 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	W0929 12:06:44.905162  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:47.405106  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:47.680000  871091 addons.go:514] duration metric: took 2.733215653s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:47.681720  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:47.681742  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.178112  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.184346  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:48.184379  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.678093  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.683059  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0929 12:06:48.684122  871091 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:48.684148  871091 api_server.go:131] duration metric: took 1.006856952s to wait for apiserver health ...
	I0929 12:06:48.684159  871091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:48.686922  871091 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:48.686951  871091 system_pods.go:61] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.686958  871091 system_pods.go:61] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.686972  871091 system_pods.go:61] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.686984  871091 system_pods.go:61] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.686999  871091 system_pods.go:61] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:48.687008  871091 system_pods.go:61] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.687018  871091 system_pods.go:61] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.687024  871091 system_pods.go:61] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.687035  871091 system_pods.go:74] duration metric: took 2.869523ms to wait for pod list to return data ...
	I0929 12:06:48.687047  871091 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:48.690705  871091 default_sa.go:45] found service account: "default"
	I0929 12:06:48.690730  871091 default_sa.go:55] duration metric: took 3.675534ms for default service account to be created ...
	I0929 12:06:48.690740  871091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:48.693650  871091 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:48.693684  871091 system_pods.go:89] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.693693  871091 system_pods.go:89] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.693715  871091 system_pods.go:89] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.693725  871091 system_pods.go:89] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.693733  871091 system_pods.go:89] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running
	I0929 12:06:48.693738  871091 system_pods.go:89] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.693743  871091 system_pods.go:89] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.693753  871091 system_pods.go:89] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.693770  871091 system_pods.go:126] duration metric: took 3.022951ms to wait for k8s-apps to be running ...
	I0929 12:06:48.693778  871091 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:48.693838  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:48.706595  871091 system_svc.go:56] duration metric: took 12.805298ms WaitForService to wait for kubelet
	I0929 12:06:48.706622  871091 kubeadm.go:578] duration metric: took 3.759872419s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:48.706643  871091 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:48.709282  871091 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:48.709305  871091 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:48.709317  871091 node_conditions.go:105] duration metric: took 2.669783ms to run NodePressure ...
	I0929 12:06:48.709327  871091 start.go:241] waiting for startup goroutines ...
	I0929 12:06:48.709334  871091 start.go:246] waiting for cluster config update ...
	I0929 12:06:48.709345  871091 start.go:255] writing updated cluster config ...
	I0929 12:06:48.709631  871091 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:48.713435  871091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:48.716857  871091 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:50.722059  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:47.756924  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.757051  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.903749  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:51.904179  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:52.722481  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:55.222976  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:52.257245  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:54.757176  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:56.756246  861376 pod_ready.go:94] pod "coredns-66bc5c9577-zqqdn" is "Ready"
	I0929 12:06:56.756280  861376 pod_ready.go:86] duration metric: took 38.005267391s for pod "coredns-66bc5c9577-zqqdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.761541  861376 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.765343  861376 pod_ready.go:94] pod "etcd-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.765363  861376 pod_ready.go:86] duration metric: took 3.798035ms for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.767218  861376 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.770588  861376 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.770606  861376 pod_ready.go:86] duration metric: took 3.370627ms for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.772342  861376 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.955016  861376 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.955044  861376 pod_ready.go:86] duration metric: took 182.679374ms for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.155127  861376 pod_ready.go:83] waiting for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.555193  861376 pod_ready.go:94] pod "kube-proxy-bspjk" is "Ready"
	I0929 12:06:57.555220  861376 pod_ready.go:86] duration metric: took 400.064967ms for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.755450  861376 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155379  861376 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:58.155405  861376 pod_ready.go:86] duration metric: took 399.927452ms for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155417  861376 pod_ready.go:40] duration metric: took 39.40795228s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:58.201296  861376 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:06:58.203132  861376 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-414542" cluster and "default" namespace by default
	W0929 12:06:53.904220  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:56.404228  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:57.722276  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:00.222038  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:58.904138  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:00.904689  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:03.404607  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:02.722573  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.222722  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.903327  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.903942  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.722224  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.722687  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.904282  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:07:10.403750  866509 pod_ready.go:94] pod "coredns-66bc5c9577-h49hh" is "Ready"
	I0929 12:07:10.403779  866509 pod_ready.go:86] duration metric: took 34.505404913s for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.406142  866509 pod_ready.go:83] waiting for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.409848  866509 pod_ready.go:94] pod "etcd-embed-certs-031687" is "Ready"
	I0929 12:07:10.409884  866509 pod_ready.go:86] duration metric: took 3.705005ms for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.411799  866509 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.415853  866509 pod_ready.go:94] pod "kube-apiserver-embed-certs-031687" is "Ready"
	I0929 12:07:10.415901  866509 pod_ready.go:86] duration metric: took 4.068426ms for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.417734  866509 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.601598  866509 pod_ready.go:94] pod "kube-controller-manager-embed-certs-031687" is "Ready"
	I0929 12:07:10.601629  866509 pod_ready.go:86] duration metric: took 183.870372ms for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.801642  866509 pod_ready.go:83] waiting for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.201791  866509 pod_ready.go:94] pod "kube-proxy-8lx97" is "Ready"
	I0929 12:07:11.201815  866509 pod_ready.go:86] duration metric: took 400.146465ms for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.402190  866509 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802461  866509 pod_ready.go:94] pod "kube-scheduler-embed-certs-031687" is "Ready"
	I0929 12:07:11.802499  866509 pod_ready.go:86] duration metric: took 400.277946ms for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802515  866509 pod_ready.go:40] duration metric: took 35.908487233s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:11.853382  866509 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:11.856798  866509 out.go:179] * Done! kubectl is now configured to use "embed-certs-031687" cluster and "default" namespace by default
	W0929 12:07:12.221602  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:14.221842  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:16.222454  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:18.722820  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:20.725000  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	I0929 12:07:21.222494  871091 pod_ready.go:94] pod "coredns-66bc5c9577-llrxw" is "Ready"
	I0929 12:07:21.222527  871091 pod_ready.go:86] duration metric: took 32.505636564s for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.225025  871091 pod_ready.go:83] waiting for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.228512  871091 pod_ready.go:94] pod "etcd-no-preload-306088" is "Ready"
	I0929 12:07:21.228529  871091 pod_ready.go:86] duration metric: took 3.482765ms for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.230262  871091 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.233598  871091 pod_ready.go:94] pod "kube-apiserver-no-preload-306088" is "Ready"
	I0929 12:07:21.233622  871091 pod_ready.go:86] duration metric: took 3.343035ms for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.235393  871091 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.421017  871091 pod_ready.go:94] pod "kube-controller-manager-no-preload-306088" is "Ready"
	I0929 12:07:21.421047  871091 pod_ready.go:86] duration metric: took 185.636666ms for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.621421  871091 pod_ready.go:83] waiting for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.020579  871091 pod_ready.go:94] pod "kube-proxy-79hf6" is "Ready"
	I0929 12:07:22.020611  871091 pod_ready.go:86] duration metric: took 399.163924ms for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.220586  871091 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620444  871091 pod_ready.go:94] pod "kube-scheduler-no-preload-306088" is "Ready"
	I0929 12:07:22.620469  871091 pod_ready.go:86] duration metric: took 399.857006ms for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620481  871091 pod_ready.go:40] duration metric: took 33.907023232s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:22.667955  871091 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:22.669694  871091 out.go:179] * Done! kubectl is now configured to use "no-preload-306088" cluster and "default" namespace by default
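	
	The tail of the log above is minikube's readiness loop: api_server.go repeatedly GETs the apiserver's /healthz endpoint, tolerating 500s while post-start hooks (rbac/bootstrap-roles, system priority classes) finish, and moves on once it sees 200/ok. The following is a minimal, self-contained sketch of that kind of probe, not minikube's actual implementation; the endpoint URL and the roughly 500ms cadence are taken from the log, and certificate verification is skipped purely for brevity.
	
	// healthz_probe.go -- illustrative only; a real client would trust the cluster CA
	// rather than skip TLS verification.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		url := "https://192.168.94.2:8443/healthz" // endpoint seen in the log above
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(2 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("healthz ok")
					return
				}
				// Non-200 (e.g. 500 while post-start hooks finish): print and retry.
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			} else {
				fmt.Printf("healthz not reachable yet: %v\n", err)
			}
			time.Sleep(500 * time.Millisecond) // roughly the retry cadence visible above
		}
		fmt.Println("gave up waiting for healthz")
	}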
	
	
	==> Docker <==
	Sep 29 12:07:03 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:07:03.605159390Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:07:03 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:07:03.605266976Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:07:03 old-k8s-version-858855 cri-dockerd[1109]: time="2025-09-29T12:07:03Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:07:10 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:07:10.519679677Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:07:10 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:07:10.553017342Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:08:24 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:24.567510614Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:08:24 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:24.627170890Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:08:24 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:24.627271309Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:08:24 old-k8s-version-858855 cri-dockerd[1109]: time="2025-09-29T12:08:24Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:08:30 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:30.429563900Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:08:30 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:30.429599448Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:08:30 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:30.431736216Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:08:30 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:30.431766504Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:08:37 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:37.523811901Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:08:37 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:08:37.554810527Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:11:13 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:13.569199899Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:11:13 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:13.620783097Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:11:13 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:13.620907618Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:11:13 old-k8s-version-858855 cri-dockerd[1109]: time="2025-09-29T12:11:13Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:11:15 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:15.639449715Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:11:15 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:15.639483465Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:11:15 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:15.641446600Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:11:15 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:15.641476269Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:11:20 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:20.523109595Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:11:20 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:20.559671206Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
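	
	The dockerd entries above record two distinct pull failures on the old-k8s-version node: the deliberately unresolvable fake.domain registry used by the metrics-server test (DNS lookup fails before any pull starts) and Docker Hub rejecting unauthenticated pulls of kubernetesui/dashboard with a rate-limit error. The sketch below is an illustrative probe of a registry's /v2/ endpoint (the same URL dockerd is shown requesting), useful for telling the two cases apart; it is not part of minikube or the test suite, and the registry hostnames are just the ones appearing in the log.
	
	// registry_probe.go -- illustrative only.
	package main
	
	import (
		"fmt"
		"net/http"
		"time"
	)
	
	func probe(registry string) {
		client := &http.Client{Timeout: 10 * time.Second}
		// Registries expose a version-check endpoint at /v2/.
		resp, err := client.Get("https://" + registry + "/v2/")
		if err != nil {
			fmt.Printf("%s: unreachable (%v)\n", registry, err) // e.g. DNS lookup failure
			return
		}
		defer resp.Body.Close()
		fmt.Printf("%s: reachable, /v2/ returned %d\n", registry, resp.StatusCode)
	}
	
	func main() {
		probe("fake.domain")          // expected: DNS lookup failure, as in the log
		probe("registry-1.docker.io") // Docker Hub; 401 here is normal, pulls may still be rate limited
	}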
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	942edd51c699f       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       2                   07bffe4f8ab31       storage-provisioner
	40d1936f7a182       ead0a4a53df89                                                                                         9 minutes ago       Running             coredns                   1                   475c4fa557701       coredns-5dd5756b68-xbvjd
	c1e6b6259f0e6       56cc512116c8f                                                                                         9 minutes ago       Running             busybox                   1                   139c0966fc4c3       busybox
	8924ee529df34       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       1                   07bffe4f8ab31       storage-provisioner
	22ba39d2ae5a3       ea1030da44aa1                                                                                         9 minutes ago       Running             kube-proxy                1                   3ddb6636f3ce5       kube-proxy-9w9zt
	ee084712c1b8e       4be79c38a4bab                                                                                         9 minutes ago       Running             kube-controller-manager   1                   c15364d04af73       kube-controller-manager-old-k8s-version-858855
	e1abbb3530f23       73deb9a3f7025                                                                                         9 minutes ago       Running             etcd                      1                   8d8b7b4c01209       etcd-old-k8s-version-858855
	566c90e1275a8       bb5e0dde9054c                                                                                         9 minutes ago       Running             kube-apiserver            1                   7e7ee9522cbcb       kube-apiserver-old-k8s-version-858855
	f621e5a4db271       f6f496300a2ae                                                                                         9 minutes ago       Running             kube-scheduler            1                   238b013375b50       kube-scheduler-old-k8s-version-858855
	72d289f470fa3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              busybox                   0                   785221f4de24e       busybox
	ca05612b0c1a1       ead0a4a53df89                                                                                         10 minutes ago      Exited              coredns                   0                   af4f4f5a90e27       coredns-5dd5756b68-xbvjd
	d3482105a1e11       ea1030da44aa1                                                                                         10 minutes ago      Exited              kube-proxy                0                   52f0f8d9723f0       kube-proxy-9w9zt
	f16a413904c89       bb5e0dde9054c                                                                                         10 minutes ago      Exited              kube-apiserver            0                   c690998fe1b7f       kube-apiserver-old-k8s-version-858855
	d89f29914e486       73deb9a3f7025                                                                                         10 minutes ago      Exited              etcd                      0                   e464438e3531d       etcd-old-k8s-version-858855
	b657e8edad2ba       4be79c38a4bab                                                                                         10 minutes ago      Exited              kube-controller-manager   0                   3f6869d6bebc9       kube-controller-manager-old-k8s-version-858855
	7ec694630b5d1       f6f496300a2ae                                                                                         10 minutes ago      Exited              kube-scheduler            0                   64f650385a37c       kube-scheduler-old-k8s-version-858855
	
	
	==> coredns [40d1936f7a18] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49446 - 6398 "HINFO IN 2432455842848361899.6694524293727266407. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016906895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [ca05612b0c1a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-858855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-858855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=old-k8s-version-858855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_04_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:04:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-858855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:14:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:10:31 +0000   Mon, 29 Sep 2025 12:04:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:10:31 +0000   Mon, 29 Sep 2025 12:04:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:10:31 +0000   Mon, 29 Sep 2025 12:04:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:10:31 +0000   Mon, 29 Sep 2025 12:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-858855
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 25c4315876594b7ebc42d99e6e882c81
	  System UUID:                0d302006-e090-41d5-9094-71b88b7d0779
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-xbvjd                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-old-k8s-version-858855                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-old-k8s-version-858855             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-858855    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-9w9zt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-858855             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-cqfgh                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-dkknq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-schbp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m35s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-858855 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-858855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-858855 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-858855 event: Registered Node old-k8s-version-858855 in Controller
	  Normal  Starting                 9m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m41s (x9 over 9m41s)  kubelet          Node old-k8s-version-858855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m41s (x7 over 9m41s)  kubelet          Node old-k8s-version-858855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m41s (x7 over 9m41s)  kubelet          Node old-k8s-version-858855 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m24s                  node-controller  Node old-k8s-version-858855 event: Registered Node old-k8s-version-858855 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e ea 9d d2 75 10 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000032] ll header: 00000000: ff ff ff ff ff ff 02 ed 9c 9f 01 b3 08 06
	[  +7.676274] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 8f 99 59 79 53 08 06
	[  +0.010443] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 ef 7b 7a 25 80 08 06
	[Sep29 12:05] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 2f 1f 69 18 cd 08 06
	[  +1.465609] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e fa a1 d1 16 fd 08 06
	[  +0.010904] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 28 d0 79 65 86 08 06
	[ +11.321410] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 4d be 93 b2 64 08 06
	[  +0.030376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d1 94 90 6f a6 08 06
	[  +0.372330] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ae 62 92 9c b4 08 06
	[Sep29 12:06] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be c7 f6 43 2b 7f 08 06
	[ +17.127071] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a de e7 85 72 24 08 06
	[ +12.501214] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 4d 9c c6 34 d5 08 06
	
	
	==> etcd [d89f29914e48] <==
	{"level":"info","ts":"2025-09-29T12:04:26.215031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-09-29T12:04:26.215047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-29T12:04:26.215059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-09-29T12:04:26.21507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-29T12:04:26.21617Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-858855 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T12:04:26.216205Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:04:26.216258Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:04:26.216295Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:04:26.217204Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T12:04:26.217227Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T12:04:26.217926Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:04:26.218088Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:04:26.218119Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:04:26.218861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T12:04:26.219415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-09-29T12:04:58.212733Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:04:58.212821Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"old-k8s-version-858855","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"warn","ts":"2025-09-29T12:04:58.212955Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:04:58.215977Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:04:58.257329Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:04:58.257394Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:04:58.259441Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2025-09-29T12:04:58.261427Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-29T12:04:58.261548Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-29T12:04:58.261561Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"old-k8s-version-858855","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	
	==> etcd [e1abbb3530f2] <==
	{"level":"info","ts":"2025-09-29T12:05:22.898792Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-29T12:05:22.898802Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-29T12:05:22.89903Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-09-29T12:05:22.899122Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-09-29T12:05:22.89923Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:05:22.89931Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:05:22.90111Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T12:05:22.90135Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-29T12:05:22.901806Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-29T12:05:22.901385Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T12:05:22.901406Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T12:05:24.586782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T12:05:24.586834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T12:05:24.586893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-29T12:05:24.586915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T12:05:24.586924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-09-29T12:05:24.586933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-09-29T12:05:24.586949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-09-29T12:05:24.588481Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-858855 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T12:05:24.588515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:05:24.588509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:05:24.588727Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T12:05:24.588769Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T12:05:24.589742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T12:05:24.589755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	
	
	==> kernel <==
	 12:15:02 up  1:57,  0 users,  load average: 0.48, 1.36, 2.31
	Linux old-k8s-version-858855 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [566c90e1275a] <==
	E0929 12:10:26.718719       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 12:10:26.719713       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:11:25.650830       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.91.25:443: connect: connection refused
	I0929 12:11:25.650853       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 12:11:26.719683       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:11:26.719735       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 12:11:26.719745       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:11:26.719918       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:11:26.720016       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 12:11:26.721815       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:12:25.650774       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.91.25:443: connect: connection refused
	I0929 12:12:25.650801       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 12:13:25.650931       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.91.25:443: connect: connection refused
	I0929 12:13:25.650953       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 12:13:26.720361       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:13:26.720400       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 12:13:26.720408       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:13:26.722538       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:13:26.722600       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 12:13:26.722610       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:14:25.651210       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.91.25:443: connect: connection refused
	I0929 12:14:25.651238       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-apiserver [f16a413904c8] <==
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:05:08.005616       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:05:08.095162       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:05:08.125460       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b657e8edad2b] <==
	I0929 12:04:43.417902       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9w9zt"
	I0929 12:04:43.500870       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-x2c2w"
	I0929 12:04:43.515049       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xbvjd"
	I0929 12:04:43.556340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="409.708028ms"
	I0929 12:04:43.576313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.904674ms"
	I0929 12:04:43.576679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.403µs"
	I0929 12:04:43.579390       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.737µs"
	I0929 12:04:43.604294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="174.21µs"
	I0929 12:04:44.452823       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0929 12:04:44.468181       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-x2c2w"
	I0929 12:04:44.480092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.29141ms"
	I0929 12:04:44.490584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.424546ms"
	I0929 12:04:44.490697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.806µs"
	I0929 12:04:45.045836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.847µs"
	I0929 12:04:45.073966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.289563ms"
	I0929 12:04:45.074158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.243µs"
	I0929 12:04:50.291943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.528µs"
	I0929 12:04:51.152147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="615.199µs"
	I0929 12:04:51.167916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="161.104µs"
	I0929 12:04:51.172546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.954µs"
	I0929 12:04:57.530997       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0929 12:04:57.545034       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-cqfgh"
	I0929 12:04:57.560078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="28.908098ms"
	I0929 12:04:57.605438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="45.153823ms"
	I0929 12:04:57.606099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="293.934µs"
	
	
	==> kube-controller-manager [ee084712c1b8] <==
	I0929 12:10:08.723532       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:10:38.291131       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:10:38.731496       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:11:08.295743       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:11:08.739564       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 12:11:26.513455       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="133.659µs"
	I0929 12:11:29.515864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="117.727µs"
	I0929 12:11:32.512196       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="97.79µs"
	I0929 12:11:37.514252       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="112.138µs"
	E0929 12:11:38.300441       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:11:38.747328       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 12:11:43.514009       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="129µs"
	I0929 12:11:46.513431       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="116.856µs"
	E0929 12:12:08.304366       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:12:08.753953       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:12:38.308967       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:12:38.761708       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:13:08.313412       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:13:08.768684       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:13:38.318024       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:13:38.775804       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:14:08.321895       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:14:08.783342       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:14:38.326345       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:14:38.790567       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [22ba39d2ae5a] <==
	I0929 12:05:27.342042       1 server_others.go:69] "Using iptables proxy"
	I0929 12:05:27.356586       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0929 12:05:27.385742       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:05:27.389194       1 server_others.go:152] "Using iptables Proxier"
	I0929 12:05:27.389241       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 12:05:27.389253       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 12:05:27.389320       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 12:05:27.389694       1 server.go:846] "Version info" version="v1.28.0"
	I0929 12:05:27.389718       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:27.392202       1 config.go:97] "Starting endpoint slice config controller"
	I0929 12:05:27.392241       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 12:05:27.392381       1 config.go:315] "Starting node config controller"
	I0929 12:05:27.392402       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 12:05:27.392954       1 config.go:188] "Starting service config controller"
	I0929 12:05:27.393018       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 12:05:27.493045       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 12:05:27.493131       1 shared_informer.go:318] Caches are synced for node config
	I0929 12:05:27.494351       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [d3482105a1e1] <==
	I0929 12:04:44.338727       1 server_others.go:69] "Using iptables proxy"
	I0929 12:04:44.358566       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0929 12:04:44.409997       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:04:44.418531       1 server_others.go:152] "Using iptables Proxier"
	I0929 12:04:44.418581       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 12:04:44.418591       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 12:04:44.418622       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 12:04:44.419106       1 server.go:846] "Version info" version="v1.28.0"
	I0929 12:04:44.419124       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:04:44.420999       1 config.go:188] "Starting service config controller"
	I0929 12:04:44.423323       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 12:04:44.421631       1 config.go:97] "Starting endpoint slice config controller"
	I0929 12:04:44.423863       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 12:04:44.422476       1 config.go:315] "Starting node config controller"
	I0929 12:04:44.424551       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 12:04:44.524998       1 shared_informer.go:318] Caches are synced for service config
	I0929 12:04:44.527171       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 12:04:44.527656       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7ec694630b5d] <==
	W0929 12:04:27.671446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 12:04:27.672437       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0929 12:04:27.671522       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 12:04:27.672464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 12:04:27.672043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0929 12:04:27.672559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0929 12:04:28.489560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0929 12:04:28.489601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0929 12:04:28.632661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0929 12:04:28.632706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0929 12:04:28.652576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0929 12:04:28.652619       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0929 12:04:28.691939       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0929 12:04:28.691976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0929 12:04:28.713737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 12:04:28.713778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0929 12:04:28.787145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0929 12:04:28.787174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0929 12:04:28.888146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 12:04:28.888189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0929 12:04:29.141219       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0929 12:04:29.141260       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0929 12:04:32.264978       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 12:04:58.226157       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0929 12:04:58.226292       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f621e5a4db27] <==
	I0929 12:05:23.602302       1 serving.go:348] Generated self-signed cert in-memory
	W0929 12:05:25.725035       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:05:25.725100       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0929 12:05:25.725117       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:05:25.725128       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:05:25.760715       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 12:05:25.762930       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:25.766773       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:05:25.767313       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 12:05:25.767818       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 12:05:25.767906       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0929 12:05:25.868184       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 12:13:17 old-k8s-version-858855 kubelet[1330]: E0929 12:13:17.504819    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:13:23 old-k8s-version-858855 kubelet[1330]: E0929 12:13:23.504962    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:13:24 old-k8s-version-858855 kubelet[1330]: E0929 12:13:24.504049    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:13:28 old-k8s-version-858855 kubelet[1330]: E0929 12:13:28.504057    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:13:35 old-k8s-version-858855 kubelet[1330]: E0929 12:13:35.504743    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:13:38 old-k8s-version-858855 kubelet[1330]: E0929 12:13:38.504559    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:13:41 old-k8s-version-858855 kubelet[1330]: E0929 12:13:41.504461    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:13:46 old-k8s-version-858855 kubelet[1330]: E0929 12:13:46.504224    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:13:50 old-k8s-version-858855 kubelet[1330]: E0929 12:13:50.504815    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:13:56 old-k8s-version-858855 kubelet[1330]: E0929 12:13:56.504532    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:13:59 old-k8s-version-858855 kubelet[1330]: E0929 12:13:59.504048    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:14:05 old-k8s-version-858855 kubelet[1330]: E0929 12:14:05.505844    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:14:08 old-k8s-version-858855 kubelet[1330]: E0929 12:14:08.504295    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:14:14 old-k8s-version-858855 kubelet[1330]: E0929 12:14:14.504494    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:14:19 old-k8s-version-858855 kubelet[1330]: E0929 12:14:19.504088    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:14:21 old-k8s-version-858855 kubelet[1330]: E0929 12:14:21.504450    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:14:28 old-k8s-version-858855 kubelet[1330]: E0929 12:14:28.504737    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:14:32 old-k8s-version-858855 kubelet[1330]: E0929 12:14:32.504185    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:14:34 old-k8s-version-858855 kubelet[1330]: E0929 12:14:34.504715    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:14:43 old-k8s-version-858855 kubelet[1330]: E0929 12:14:43.504117    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:14:43 old-k8s-version-858855 kubelet[1330]: E0929 12:14:43.504183    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:14:45 old-k8s-version-858855 kubelet[1330]: E0929 12:14:45.504444    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:14:54 old-k8s-version-858855 kubelet[1330]: E0929 12:14:54.503854    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:14:56 old-k8s-version-858855 kubelet[1330]: E0929 12:14:56.504170    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:14:56 old-k8s-version-858855 kubelet[1330]: E0929 12:14:56.504200    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	
	
	==> storage-provisioner [8924ee529df3] <==
	I0929 12:05:27.291709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:05:57.296307       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [942edd51c699] <==
	I0929 12:06:10.611405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 12:06:10.619390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 12:06:10.619479       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 12:06:28.017694       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 12:06:28.017829       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2b06835-e235-47b6-8894-f950d4aafc39", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-858855_400a105f-e149-4c4a-9a60-7ce30b0d787c became leader
	I0929 12:06:28.017899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-858855_400a105f-e149-4c4a-9a60-7ce30b0d787c!
	I0929 12:06:28.118164       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-858855_400a105f-e149-4c4a-9a60-7ce30b0d787c!
	

                                                
                                                
-- /stdout --
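The storage-provisioner output above shows the first instance (8924ee529df3) dying on an i/o timeout to the in-cluster API VIP, while the replacement (942edd51c699) acquires the endpoints-based lease kube-system/k8s.io-minikube-hostpath. As a rough follow-up check (a sketch, assuming the old-k8s-version-858855 context is still reachable and that the provisioner keeps using an Endpoints lock as the LeaderElection event suggests), the current holder and renew time can be read straight off the leader-election annotation:

    kubectl --context old-k8s-version-858855 -n kube-system get endpoints k8s.io-minikube-hostpath \
      -o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'

The holder identity in that record should match the old-k8s-version-858855_400a105f-... name seen in the LeaderElection event above.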
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858855 -n old-k8s-version-858855
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-858855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-cqfgh dashboard-metrics-scraper-5f989dc9cf-dkknq kubernetes-dashboard-8694d4445c-schbp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-858855 describe pod metrics-server-57f55c9bc5-cqfgh dashboard-metrics-scraper-5f989dc9cf-dkknq kubernetes-dashboard-8694d4445c-schbp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-858855 describe pod metrics-server-57f55c9bc5-cqfgh dashboard-metrics-scraper-5f989dc9cf-dkknq kubernetes-dashboard-8694d4445c-schbp: exit status 1 (64.687892ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-cqfgh" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-dkknq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-schbp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-858855 describe pod metrics-server-57f55c9bc5-cqfgh dashboard-metrics-scraper-5f989dc9cf-dkknq kubernetes-dashboard-8694d4445c-schbp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.41s)
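The failure itself is an image-pull problem rather than a scheduling one: the kubelet entries above show metrics-server backing off on the intentionally unreachable fake.domain/registry.k8s.io/echoserver:1.4 registry alias, while the dashboard pods back off on real docker.io pulls. Note that the describe attempt above returned NotFound only because it queried the default namespace; the pods live in kube-system and kubernetes-dashboard. A compact way to confirm the waiting reason for every non-running pod in one shot (a sketch that mirrors the field selector the helper already uses, and assumes single-container pods so the [0] index is valid):

    kubectl --context old-k8s-version-858855 get pods -A --field-selector=status.phase!=Running \
      -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.status.containerStatuses[0].state.waiting.reason}{"\n"}{end}'

For this run that would be expected to print ImagePullBackOff (or ErrImagePull) for metrics-server-57f55c9bc5-cqfgh, dashboard-metrics-scraper-5f989dc9cf-dkknq and kubernetes-dashboard-8694d4445c-schbp, matching the kubelet log.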

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.37s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cxjff" [3e3d7969-3840-4382-aed3-5a0078b5c059] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 12:07:03.042120  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:03.048523  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:03.059969  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:03.081381  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:03.122791  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:03.204340  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:03.366342  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:03.687756  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:04.329662  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:05.611072  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:08.172503  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 12:15:58.860965018 +0000 UTC m=+3840.904777111
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 describe po kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-414542 describe po kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-cxjff
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-414542/192.168.85.2
Start Time:       Mon, 29 Sep 2025 12:06:21 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f88dc (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-f88dc:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m37s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff to default-k8s-diff-port-414542
Normal   Pulling    6m34s (x5 over 9m37s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m33s (x5 over 9m37s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m33s (x5 over 9m37s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m31s (x21 over 9m37s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m31s (x21 over 9m37s)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 logs kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-414542 logs kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard: exit status 1 (71.742346ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-cxjff" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-414542 logs kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
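The Events section above pins this failure on Docker Hub's anonymous pull throttle ("toomanyrequests: You have reached your unauthenticated pull rate limit") rather than anything cluster-side. A quick way to see how much anonymous pull budget the CI host has left is the rate-limit probe Docker documents, sketched here on the assumption that curl and jq are available on the host:

    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -sI -H "Authorization: Bearer $TOKEN" \
      https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The ratelimit-limit and ratelimit-remaining response headers report the current allowance and how much of it remains for this source IP.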
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-414542
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-414542:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b",
	        "Created": "2025-09-29T12:05:11.098346797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 861575,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:06:07.105136284Z",
	            "FinishedAt": "2025-09-29T12:06:06.313379601Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b/hostname",
	        "HostsPath": "/var/lib/docker/containers/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b/hosts",
	        "LogPath": "/var/lib/docker/containers/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b-json.log",
	        "Name": "/default-k8s-diff-port-414542",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-414542:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-414542",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b",
	                "LowerDir": "/var/lib/docker/overlay2/7d4cf8a861859f395da8695352afe0ccdae1678a37db531007e8d0e65b5d5acf-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d4cf8a861859f395da8695352afe0ccdae1678a37db531007e8d0e65b5d5acf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d4cf8a861859f395da8695352afe0ccdae1678a37db531007e8d0e65b5d5acf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d4cf8a861859f395da8695352afe0ccdae1678a37db531007e8d0e65b5d5acf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-414542",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-414542/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-414542",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-414542",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-414542",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30d4404fe9a53d936209a977607824804f2d5865ab2131bb99a438428657a9ef",
	            "SandboxKey": "/var/run/docker/netns/30d4404fe9a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-414542": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:67:52:2b:51:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "910e288d2f8f50abd1ba56a42ed95d1cdfe96eec6c96b70b9353f7a3dcc003fa",
	                    "EndpointID": "21d0a1f2de01524b0bd3ec6cee0d257c171801b2904b6278c3997f51c27d6f83",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-414542",
	                        "3994f7f7ffb7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
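Most of the inspect dump above is routine; the fields that matter for this post-mortem are the container state (running since 12:06:07, i.e. the restart succeeded) and the localhost port forwards under NetworkSettings.Ports. When only those are needed, a Go-template query keeps the output readable (a sketch; the profile name is taken from the log above):

    docker inspect -f '{{.State.Status}} {{json .NetworkSettings.Ports}}' default-k8s-diff-port-414542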
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-414542 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-414542 logs -n 25: (1.061020176s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p calico-934155 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo cat /etc/containerd/config.toml                                                                                                                                                                                           │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo containerd config dump                                                                                                                                                                                                    │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p disable-driver-mounts-929504                                                                                                                                                                                                                 │ disable-driver-mounts-929504 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │                     │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl cat crio --no-pager                                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                   │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo crio config                                                                                                                                                                                                               │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p calico-934155                                                                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-858855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ stop    │ -p default-k8s-diff-port-414542 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-031687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p embed-certs-031687 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-031687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-306088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p no-preload-306088 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p no-preload-306088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:06:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:06:36.516482  871091 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:06:36.516771  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.516782  871091 out.go:374] Setting ErrFile to fd 2...
	I0929 12:06:36.516786  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.517034  871091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:06:36.517566  871091 out.go:368] Setting JSON to false
	I0929 12:06:36.519099  871091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6540,"bootTime":1759141056,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:06:36.519186  871091 start.go:140] virtualization: kvm guest
	I0929 12:06:36.521306  871091 out.go:179] * [no-preload-306088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:06:36.522994  871091 notify.go:220] Checking for updates...
	I0929 12:06:36.523025  871091 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:06:36.524361  871091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:06:36.526212  871091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:36.527856  871091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:06:36.529330  871091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:06:36.530640  871091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:06:36.532489  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:36.532971  871091 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:06:36.557847  871091 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:06:36.557955  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.619389  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.606711858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.619500  871091 docker.go:318] overlay module found
	I0929 12:06:36.621623  871091 out.go:179] * Using the docker driver based on existing profile
	I0929 12:06:36.622958  871091 start.go:304] selected driver: docker
	I0929 12:06:36.622977  871091 start.go:924] validating driver "docker" against &{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.623069  871091 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:06:36.623939  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.681042  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.670856635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.681348  871091 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:36.681383  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:36.681440  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:36.681496  871091 start.go:348] cluster config:
	{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.683409  871091 out.go:179] * Starting "no-preload-306088" primary control-plane node in "no-preload-306088" cluster
	I0929 12:06:36.684655  871091 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:06:36.685791  871091 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:06:36.686923  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:36.687033  871091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:06:36.687071  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:36.687230  871091 cache.go:107] acquiring lock: {Name:mk458b8403b4159d98f7ca606060a1e77262160a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687232  871091 cache.go:107] acquiring lock: {Name:mkf63d99dbdfbf068ef033ecf191a655730e20a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687337  871091 cache.go:107] acquiring lock: {Name:mkd9e4857d62d04bc7d49138f7e4fb0f42e97bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687338  871091 cache.go:107] acquiring lock: {Name:mk4450faafd650ccd11a718cb9b7190d17ab5337 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687401  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 12:06:36.687412  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 12:06:36.687392  871091 cache.go:107] acquiring lock: {Name:mkbcd57035e12e42444c6b36c8f1b923cbfef46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687414  871091 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 202.746µs
	I0929 12:06:36.687421  871091 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 90.507µs
	I0929 12:06:36.687399  871091 cache.go:107] acquiring lock: {Name:mkde0ed0d421c77cb34c222a8ab10a2c13e3e1ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687387  871091 cache.go:107] acquiring lock: {Name:mk11769872d039acf11fe2041fd2e18abd2ae3a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687446  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0929 12:06:36.687455  871091 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 64.616µs
	I0929 12:06:36.687464  871091 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 12:06:36.687467  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 12:06:36.687476  871091 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 144.146µs
	I0929 12:06:36.687484  871091 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 12:06:36.687374  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 12:06:36.687507  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 12:06:36.687466  871091 cache.go:107] acquiring lock: {Name:mk481f9282d27c94586ac987d8a6cd5ea0f1d68c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687587  871091 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.629µs
	I0929 12:06:36.687586  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 12:06:36.687603  871091 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 12:06:36.687581  871091 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 346.559µs
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 12:06:36.687607  871091 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 276.399µs
	I0929 12:06:36.687618  871091 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 12:06:36.687620  871091 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 12:06:36.687628  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 12:06:36.687644  871091 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 230.083µs
	I0929 12:06:36.687655  871091 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 12:06:36.687663  871091 cache.go:87] Successfully saved all images to host disk.
	I0929 12:06:36.709009  871091 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:06:36.709031  871091 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:06:36.709049  871091 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:06:36.709083  871091 start.go:360] acquireMachinesLock for no-preload-306088: {Name:mk0ed8d49a268e0ff510517b50934257047b58c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.709145  871091 start.go:364] duration metric: took 44.22µs to acquireMachinesLock for "no-preload-306088"
	I0929 12:06:36.709171  871091 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:06:36.709180  871091 fix.go:54] fixHost starting: 
	I0929 12:06:36.709410  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:36.728528  871091 fix.go:112] recreateIfNeeded on no-preload-306088: state=Stopped err=<nil>
	W0929 12:06:36.728557  871091 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 12:06:33.757650  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:35.757705  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:34.860020  866509 addons.go:514] duration metric: took 2.511095137s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:34.860298  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:34.860316  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.355994  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.362405  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:35.362444  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.855983  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.860174  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 12:06:35.861328  866509 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:35.861365  866509 api_server.go:131] duration metric: took 1.00564321s to wait for apiserver health ...
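	(The 500 responses above come from the apiserver's verbose health endpoint while post-start hooks such as rbac/bootstrap-roles are still finishing; once they complete, /healthz returns 200 and the wait ends after ~1s. The same per-check output can be fetched by hand, sketch only, assuming the default kubeconfig context minikube creates for this profile:)
		# verbose health report, one [+]/[-] line per check, as captured in the log
		kubectl --context embed-certs-031687 get --raw='/healthz?verbose'
		# the readiness endpoint gives the same per-check breakdown
		kubectl --context embed-certs-031687 get --raw='/readyz?verbose'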
	I0929 12:06:35.861375  866509 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:35.865988  866509 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:35.866018  866509 system_pods.go:61] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.866030  866509 system_pods.go:61] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.866041  866509 system_pods.go:61] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.866050  866509 system_pods.go:61] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.866055  866509 system_pods.go:61] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.866062  866509 system_pods.go:61] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.866068  866509 system_pods.go:61] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.866076  866509 system_pods.go:61] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.866083  866509 system_pods.go:74] duration metric: took 4.69699ms to wait for pod list to return data ...
	I0929 12:06:35.866093  866509 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:35.868695  866509 default_sa.go:45] found service account: "default"
	I0929 12:06:35.868715  866509 default_sa.go:55] duration metric: took 2.61564ms for default service account to be created ...
	I0929 12:06:35.868726  866509 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:35.872060  866509 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:35.872097  866509 system_pods.go:89] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.872135  866509 system_pods.go:89] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.872153  866509 system_pods.go:89] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.872164  866509 system_pods.go:89] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.872173  866509 system_pods.go:89] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.872187  866509 system_pods.go:89] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.872200  866509 system_pods.go:89] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.872215  866509 system_pods.go:89] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.872229  866509 system_pods.go:126] duration metric: took 3.496882ms to wait for k8s-apps to be running ...
	I0929 12:06:35.872241  866509 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:35.872298  866509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:35.886596  866509 system_svc.go:56] duration metric: took 14.342667ms WaitForService to wait for kubelet
	I0929 12:06:35.886631  866509 kubeadm.go:578] duration metric: took 3.537789699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:35.886658  866509 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:35.889756  866509 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:35.889792  866509 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:35.889815  866509 node_conditions.go:105] duration metric: took 3.143621ms to run NodePressure ...
	I0929 12:06:35.889827  866509 start.go:241] waiting for startup goroutines ...
	I0929 12:06:35.889846  866509 start.go:246] waiting for cluster config update ...
	I0929 12:06:35.889860  866509 start.go:255] writing updated cluster config ...
	I0929 12:06:35.890142  866509 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:35.893992  866509 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:35.898350  866509 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:37.904542  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:36.730585  871091 out.go:252] * Restarting existing docker container for "no-preload-306088" ...
	I0929 12:06:36.730671  871091 cli_runner.go:164] Run: docker start no-preload-306088
	I0929 12:06:36.986434  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:37.007128  871091 kic.go:430] container "no-preload-306088" state is running.
	I0929 12:06:37.007513  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:37.028527  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:37.028818  871091 machine.go:93] provisionDockerMachine start ...
	I0929 12:06:37.028949  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:37.047803  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:37.048197  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:37.048230  871091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:06:37.048917  871091 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35296->127.0.0.1:33523: read: connection reset by peer
	I0929 12:06:40.187221  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.187251  871091 ubuntu.go:182] provisioning hostname "no-preload-306088"
	I0929 12:06:40.187303  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.206043  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.206254  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.206273  871091 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-306088 && echo "no-preload-306088" | sudo tee /etc/hostname
	I0929 12:06:40.358816  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.358923  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.377596  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.377950  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.377981  871091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-306088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-306088/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-306088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:06:40.514897  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:06:40.514933  871091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:06:40.514962  871091 ubuntu.go:190] setting up certificates
	I0929 12:06:40.514972  871091 provision.go:84] configureAuth start
	I0929 12:06:40.515033  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:40.534028  871091 provision.go:143] copyHostCerts
	I0929 12:06:40.534112  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:06:40.534132  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:06:40.534221  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:06:40.534378  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:06:40.534391  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:06:40.534433  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:06:40.534548  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:06:40.534559  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:06:40.534599  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:06:40.534700  871091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.no-preload-306088 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-306088]
	I0929 12:06:40.796042  871091 provision.go:177] copyRemoteCerts
	I0929 12:06:40.796100  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:06:40.796141  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.814638  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:40.913779  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:06:40.940147  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:06:40.966181  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:06:40.992149  871091 provision.go:87] duration metric: took 477.163201ms to configureAuth
	I0929 12:06:40.992177  871091 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:06:40.992354  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:40.992402  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.010729  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.011015  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.011031  871091 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:06:41.149250  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:06:41.149283  871091 ubuntu.go:71] root file system type: overlay
	I0929 12:06:41.149434  871091 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:06:41.149508  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.169382  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.169625  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.169731  871091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:06:41.327834  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:06:41.327968  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.349146  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.349454  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.349487  871091 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:06:41.500464  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:06:41.500497  871091 machine.go:96] duration metric: took 4.471659866s to provisionDockerMachine
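	(The docker.service unit printed above is only swapped in, and dockerd restarted, when it differs from what is already on the node; that is what the diff/mv/systemctl command a few lines earlier does. A hedged way to confirm which unit the node actually loaded, using the profile name from this log:)
		# show the docker unit (and any drop-ins) inside the minikube node
		out/minikube-linux-amd64 ssh -p no-preload-306088 -- sudo systemctl cat docker.service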
	I0929 12:06:41.500512  871091 start.go:293] postStartSetup for "no-preload-306088" (driver="docker")
	I0929 12:06:41.500527  871091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:06:41.500590  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:06:41.500647  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	W0929 12:06:38.257066  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.257540  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.404187  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:42.404863  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:41.520904  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.620006  871091 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:06:41.623863  871091 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:06:41.623914  871091 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:06:41.623925  871091 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:06:41.623935  871091 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:06:41.623959  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:06:41.624015  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:06:41.624111  871091 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:06:41.624227  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:06:41.634489  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:41.661187  871091 start.go:296] duration metric: took 160.643724ms for postStartSetup
	I0929 12:06:41.661275  871091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:06:41.661317  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.679286  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.773350  871091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:06:41.778053  871091 fix.go:56] duration metric: took 5.068864392s for fixHost
	I0929 12:06:41.778084  871091 start.go:83] releasing machines lock for "no-preload-306088", held for 5.068924928s
	I0929 12:06:41.778174  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:41.796247  871091 ssh_runner.go:195] Run: cat /version.json
	I0929 12:06:41.796329  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.796378  871091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:06:41.796452  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.815939  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.816193  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.990299  871091 ssh_runner.go:195] Run: systemctl --version
	I0929 12:06:41.995288  871091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:06:42.000081  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:06:42.020438  871091 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:06:42.020518  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:06:42.029627  871091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:06:42.029658  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.029697  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.029845  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.046748  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:06:42.057142  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:06:42.067569  871091 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:06:42.067621  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:06:42.078146  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.089207  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:06:42.099515  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.109953  871091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:06:42.119715  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:06:42.130148  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:06:42.140184  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 12:06:42.151082  871091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:06:42.161435  871091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:06:42.171100  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.243863  871091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:06:42.322789  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.322843  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.322910  871091 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:06:42.336670  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.348890  871091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:06:42.364257  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.376038  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:06:42.387832  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.405901  871091 ssh_runner.go:195] Run: which cri-dockerd
	I0929 12:06:42.409515  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:06:42.419370  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:06:42.438082  871091 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:06:42.511679  871091 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:06:42.584368  871091 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:06:42.584521  871091 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 12:06:42.604074  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:06:42.615691  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.684549  871091 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 12:06:43.531184  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:06:43.543167  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:06:43.555540  871091 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 12:06:43.568219  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.580095  871091 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:06:43.648390  871091 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:06:43.718653  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.787645  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:06:43.810310  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:06:43.822583  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.892062  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:06:43.972699  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.985893  871091 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:06:43.985990  871091 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:06:43.990107  871091 start.go:563] Will wait 60s for crictl version
	I0929 12:06:43.990186  871091 ssh_runner.go:195] Run: which crictl
	I0929 12:06:43.993712  871091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:06:44.032208  871091 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:06:44.032285  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.059274  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
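	(With /etc/crictl.yaml pointed at the cri-dockerd socket, crictl and the kubelet both reach Docker through the CRI shim; the probe above reports RuntimeName docker / RuntimeVersion 28.4.0 over that endpoint. Reproducing it explicitly, sketch only, socket path taken from the crictl.yaml written above:)
		# query the CRI endpoint served by cri-dockerd directly
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version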
	I0929 12:06:44.086497  871091 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:06:44.086597  871091 cli_runner.go:164] Run: docker network inspect no-preload-306088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:06:44.103997  871091 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 12:06:44.108202  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.121433  871091 kubeadm.go:875] updating cluster {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I0929 12:06:44.121548  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:44.121582  871091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:06:44.142018  871091 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 12:06:44.142049  871091 cache_images.go:85] Images are preloaded, skipping loading
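	(Here minikube lists the node's Docker images and finds every required v1.34.0 image already tagged, so it skips loading. The same listing can be rerun by hand, sketch only, profile name from this log:)
		# list tagged images already present in the node's Docker daemon
		out/minikube-linux-amd64 ssh -p no-preload-306088 -- docker images --format '{{.Repository}}:{{.Tag}}'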
	I0929 12:06:44.142057  871091 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0929 12:06:44.142162  871091 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-306088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:06:44.142214  871091 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:06:44.196459  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:44.196503  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:44.196520  871091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:06:44.196548  871091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-306088 NodeName:no-preload-306088 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:06:44.196683  871091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-306088"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:06:44.196744  871091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:06:44.206772  871091 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:06:44.206838  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:06:44.216022  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0929 12:06:44.234761  871091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:06:44.253842  871091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
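	(The kubeadm InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration generated above has just been staged on the node as /var/tmp/minikube/kubeadm.yaml.new, 2217 bytes. If such a config needs checking before kubeadm consumes it, the bundled kubeadm binary can validate it in place; a sketch, run inside the node, paths taken from this log:)
		# static validation of the staged kubeadm config
		sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new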
	I0929 12:06:44.274561  871091 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:06:44.278469  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.290734  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.362332  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:44.386713  871091 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088 for IP: 192.168.94.2
	I0929 12:06:44.386744  871091 certs.go:194] generating shared ca certs ...
	I0929 12:06:44.386768  871091 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.386954  871091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:06:44.387011  871091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:06:44.387021  871091 certs.go:256] generating profile certs ...
	I0929 12:06:44.387100  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/client.key
	I0929 12:06:44.387155  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key.eb5a4896
	I0929 12:06:44.387190  871091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key
	I0929 12:06:44.387288  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:06:44.387320  871091 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:06:44.387329  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:06:44.387351  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:06:44.387373  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:06:44.387393  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:06:44.387440  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:44.388149  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:06:44.419158  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:06:44.448205  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:06:44.482979  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:06:44.517557  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:06:44.549867  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 12:06:44.576134  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:06:44.604658  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 12:06:44.631756  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:06:44.658081  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:06:44.684187  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:06:44.710650  871091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:06:44.729717  871091 ssh_runner.go:195] Run: openssl version
	I0929 12:06:44.735824  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:06:44.745812  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749234  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749293  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.756789  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:06:44.767948  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:06:44.778834  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782611  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782681  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.790603  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:06:44.800010  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:06:44.810306  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814380  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814509  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.822959  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
	I0929 12:06:44.834110  871091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:06:44.837912  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:06:44.844692  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:06:44.851275  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:06:44.858576  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:06:44.866396  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:06:44.875491  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 12:06:44.883074  871091 kubeadm.go:392] StartCluster: {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:44.883211  871091 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:06:44.904790  871091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:06:44.917300  871091 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:06:44.917322  871091 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:06:44.917374  871091 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:06:44.927571  871091 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:06:44.928675  871091 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-306088" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.929373  871091 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-357219/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-306088" cluster setting kubeconfig missing "no-preload-306088" context setting]
	I0929 12:06:44.930612  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.932840  871091 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:06:44.943928  871091 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0929 12:06:44.943969  871091 kubeadm.go:593] duration metric: took 26.639509ms to restartPrimaryControlPlane
	I0929 12:06:44.943982  871091 kubeadm.go:394] duration metric: took 60.918658ms to StartCluster
	I0929 12:06:44.944003  871091 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.944082  871091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.946478  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.946713  871091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:06:44.946792  871091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:06:44.946942  871091 addons.go:69] Setting storage-provisioner=true in profile "no-preload-306088"
	I0929 12:06:44.946950  871091 addons.go:69] Setting default-storageclass=true in profile "no-preload-306088"
	I0929 12:06:44.946967  871091 addons.go:238] Setting addon storage-provisioner=true in "no-preload-306088"
	I0929 12:06:44.946975  871091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-306088"
	I0929 12:06:44.946990  871091 addons.go:69] Setting metrics-server=true in profile "no-preload-306088"
	I0929 12:06:44.947004  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:44.947018  871091 addons.go:238] Setting addon metrics-server=true in "no-preload-306088"
	I0929 12:06:44.947007  871091 addons.go:69] Setting dashboard=true in profile "no-preload-306088"
	W0929 12:06:44.947027  871091 addons.go:247] addon metrics-server should already be in state true
	I0929 12:06:44.947041  871091 addons.go:238] Setting addon dashboard=true in "no-preload-306088"
	W0929 12:06:44.946976  871091 addons.go:247] addon storage-provisioner should already be in state true
	W0929 12:06:44.947052  871091 addons.go:247] addon dashboard should already be in state true
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947081  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947415  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947557  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947574  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947710  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.949123  871091 out.go:179] * Verifying Kubernetes components...
	I0929 12:06:44.951560  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.983162  871091 addons.go:238] Setting addon default-storageclass=true in "no-preload-306088"
	W0929 12:06:44.983184  871091 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:06:44.983259  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.983409  871091 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:06:44.983471  871091 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 12:06:44.984010  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.984739  871091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:44.984759  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:06:44.984810  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.985006  871091 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 12:06:44.985094  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:06:44.985115  871091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:06:44.985173  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.989553  871091 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 12:06:44.990700  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 12:06:44.990720  871091 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 12:06:44.990787  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.013082  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.023016  871091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.023045  871091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:06:45.023112  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.023478  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.027093  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.046756  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.088649  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:45.131986  871091 node_ready.go:35] waiting up to 6m0s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:45.142439  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.156825  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:06:45.156854  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 12:06:45.157091  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 12:06:45.157113  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 12:06:45.171641  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.191370  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:06:45.191407  871091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:06:45.191600  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 12:06:45.191622  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 12:06:45.225277  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 12:06:45.225316  871091 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 12:06:45.227138  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.227166  871091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0929 12:06:45.240720  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.240807  871091 retry.go:31] will retry after 255.439226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.253570  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.253730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 12:06:45.253752  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 12:06:45.256592  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.256642  871091 retry.go:31] will retry after 176.530584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.284730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 12:06:45.284766  871091 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 12:06:45.315598  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 12:06:45.315629  871091 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 12:06:45.337290  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.337352  871091 retry.go:31] will retry after 216.448516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.341267  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 12:06:45.341293  871091 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 12:06:45.367418  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 12:06:45.367447  871091 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 12:06:45.394525  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.394579  871091 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 12:06:45.428230  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.433674  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.496374  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.554373  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 12:06:42.757687  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:45.257903  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:47.043268  871091 node_ready.go:49] node "no-preload-306088" is "Ready"
	I0929 12:06:47.043313  871091 node_ready.go:38] duration metric: took 1.911288329s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:47.043336  871091 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:06:47.043393  871091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:06:47.559973  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.131688912s)
	I0929 12:06:47.560210  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.126485829s)
	I0929 12:06:47.561634  871091 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-306088 addons enable metrics-server
	
	I0929 12:06:47.677198  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.180776144s)
	I0929 12:06:47.677264  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.122845465s)
	I0929 12:06:47.677276  871091 api_server.go:72] duration metric: took 2.730527098s to wait for apiserver process to appear ...
	I0929 12:06:47.677284  871091 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:06:47.677301  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:47.677300  871091 addons.go:479] Verifying addon metrics-server=true in "no-preload-306088"
	I0929 12:06:47.679081  871091 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	W0929 12:06:44.905162  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:47.405106  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:47.680000  871091 addons.go:514] duration metric: took 2.733215653s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:47.681720  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:47.681742  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.178112  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.184346  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:48.184379  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.678093  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.683059  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0929 12:06:48.684122  871091 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:48.684148  871091 api_server.go:131] duration metric: took 1.006856952s to wait for apiserver health ...
	I0929 12:06:48.684159  871091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:48.686922  871091 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:48.686951  871091 system_pods.go:61] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.686958  871091 system_pods.go:61] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.686972  871091 system_pods.go:61] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.686984  871091 system_pods.go:61] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.686999  871091 system_pods.go:61] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:48.687008  871091 system_pods.go:61] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.687018  871091 system_pods.go:61] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.687024  871091 system_pods.go:61] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.687035  871091 system_pods.go:74] duration metric: took 2.869523ms to wait for pod list to return data ...
	I0929 12:06:48.687047  871091 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:48.690705  871091 default_sa.go:45] found service account: "default"
	I0929 12:06:48.690730  871091 default_sa.go:55] duration metric: took 3.675534ms for default service account to be created ...
	I0929 12:06:48.690740  871091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:48.693650  871091 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:48.693684  871091 system_pods.go:89] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.693693  871091 system_pods.go:89] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.693715  871091 system_pods.go:89] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.693725  871091 system_pods.go:89] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.693733  871091 system_pods.go:89] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running
	I0929 12:06:48.693738  871091 system_pods.go:89] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.693743  871091 system_pods.go:89] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.693753  871091 system_pods.go:89] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.693770  871091 system_pods.go:126] duration metric: took 3.022951ms to wait for k8s-apps to be running ...
	I0929 12:06:48.693778  871091 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:48.693838  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:48.706595  871091 system_svc.go:56] duration metric: took 12.805298ms WaitForService to wait for kubelet
	I0929 12:06:48.706622  871091 kubeadm.go:578] duration metric: took 3.759872419s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:48.706643  871091 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:48.709282  871091 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:48.709305  871091 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:48.709317  871091 node_conditions.go:105] duration metric: took 2.669783ms to run NodePressure ...
	I0929 12:06:48.709327  871091 start.go:241] waiting for startup goroutines ...
	I0929 12:06:48.709334  871091 start.go:246] waiting for cluster config update ...
	I0929 12:06:48.709345  871091 start.go:255] writing updated cluster config ...
	I0929 12:06:48.709631  871091 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:48.713435  871091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:48.716857  871091 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:50.722059  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:47.756924  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.757051  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.903749  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:51.904179  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:52.722481  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:55.222976  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:52.257245  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:54.757176  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:56.756246  861376 pod_ready.go:94] pod "coredns-66bc5c9577-zqqdn" is "Ready"
	I0929 12:06:56.756280  861376 pod_ready.go:86] duration metric: took 38.005267391s for pod "coredns-66bc5c9577-zqqdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.761541  861376 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.765343  861376 pod_ready.go:94] pod "etcd-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.765363  861376 pod_ready.go:86] duration metric: took 3.798035ms for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.767218  861376 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.770588  861376 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.770606  861376 pod_ready.go:86] duration metric: took 3.370627ms for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.772342  861376 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.955016  861376 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.955044  861376 pod_ready.go:86] duration metric: took 182.679374ms for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.155127  861376 pod_ready.go:83] waiting for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.555193  861376 pod_ready.go:94] pod "kube-proxy-bspjk" is "Ready"
	I0929 12:06:57.555220  861376 pod_ready.go:86] duration metric: took 400.064967ms for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.755450  861376 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155379  861376 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:58.155405  861376 pod_ready.go:86] duration metric: took 399.927452ms for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155417  861376 pod_ready.go:40] duration metric: took 39.40795228s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:58.201296  861376 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:06:58.203132  861376 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-414542" cluster and "default" namespace by default
	W0929 12:06:53.904220  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:56.404228  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:57.722276  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:00.222038  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:58.904138  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:00.904689  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:03.404607  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:02.722573  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.222722  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.903327  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.903942  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.722224  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.722687  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.904282  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:07:10.403750  866509 pod_ready.go:94] pod "coredns-66bc5c9577-h49hh" is "Ready"
	I0929 12:07:10.403779  866509 pod_ready.go:86] duration metric: took 34.505404913s for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.406142  866509 pod_ready.go:83] waiting for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.409848  866509 pod_ready.go:94] pod "etcd-embed-certs-031687" is "Ready"
	I0929 12:07:10.409884  866509 pod_ready.go:86] duration metric: took 3.705005ms for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.411799  866509 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.415853  866509 pod_ready.go:94] pod "kube-apiserver-embed-certs-031687" is "Ready"
	I0929 12:07:10.415901  866509 pod_ready.go:86] duration metric: took 4.068426ms for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.417734  866509 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.601598  866509 pod_ready.go:94] pod "kube-controller-manager-embed-certs-031687" is "Ready"
	I0929 12:07:10.601629  866509 pod_ready.go:86] duration metric: took 183.870372ms for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.801642  866509 pod_ready.go:83] waiting for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.201791  866509 pod_ready.go:94] pod "kube-proxy-8lx97" is "Ready"
	I0929 12:07:11.201815  866509 pod_ready.go:86] duration metric: took 400.146465ms for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.402190  866509 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802461  866509 pod_ready.go:94] pod "kube-scheduler-embed-certs-031687" is "Ready"
	I0929 12:07:11.802499  866509 pod_ready.go:86] duration metric: took 400.277946ms for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802515  866509 pod_ready.go:40] duration metric: took 35.908487233s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:11.853382  866509 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:11.856798  866509 out.go:179] * Done! kubectl is now configured to use "embed-certs-031687" cluster and "default" namespace by default
	W0929 12:07:12.221602  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:14.221842  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:16.222454  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:18.722820  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:20.725000  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	I0929 12:07:21.222494  871091 pod_ready.go:94] pod "coredns-66bc5c9577-llrxw" is "Ready"
	I0929 12:07:21.222527  871091 pod_ready.go:86] duration metric: took 32.505636564s for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.225025  871091 pod_ready.go:83] waiting for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.228512  871091 pod_ready.go:94] pod "etcd-no-preload-306088" is "Ready"
	I0929 12:07:21.228529  871091 pod_ready.go:86] duration metric: took 3.482765ms for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.230262  871091 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.233598  871091 pod_ready.go:94] pod "kube-apiserver-no-preload-306088" is "Ready"
	I0929 12:07:21.233622  871091 pod_ready.go:86] duration metric: took 3.343035ms for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.235393  871091 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.421017  871091 pod_ready.go:94] pod "kube-controller-manager-no-preload-306088" is "Ready"
	I0929 12:07:21.421047  871091 pod_ready.go:86] duration metric: took 185.636666ms for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.621421  871091 pod_ready.go:83] waiting for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.020579  871091 pod_ready.go:94] pod "kube-proxy-79hf6" is "Ready"
	I0929 12:07:22.020611  871091 pod_ready.go:86] duration metric: took 399.163924ms for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.220586  871091 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620444  871091 pod_ready.go:94] pod "kube-scheduler-no-preload-306088" is "Ready"
	I0929 12:07:22.620469  871091 pod_ready.go:86] duration metric: took 399.857006ms for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620481  871091 pod_ready.go:40] duration metric: took 33.907023232s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:22.667955  871091 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:22.669694  871091 out.go:179] * Done! kubectl is now configured to use "no-preload-306088" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 12:07:52 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:07:52.834434263Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:07:53 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:07:53.837743949Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:07:53 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:07:53.882972391Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:07:53 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:07:53.883060395Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:07:53 default-k8s-diff-port-414542 cri-dockerd[1116]: time="2025-09-29T12:07:53Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:09:19 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:19.839798260Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:09:19 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:19.891978560Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:09:19 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:19.892077580Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:09:19 default-k8s-diff-port-414542 cri-dockerd[1116]: time="2025-09-29T12:09:19Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:09:25 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:25.267162574Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:09:25 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:25.267198841Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:09:25 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:25.269198441Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:09:25 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:25.269237848Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:09:25 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:25.282978474Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:09:25 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:09:25.312185679Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.001602826Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.001672306Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.003846342Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.003902227Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.841196548Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.889451316Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.889549913Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:12:08 default-k8s-diff-port-414542 cri-dockerd[1116]: time="2025-09-29T12:12:08Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:12:17 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:17.796584573Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:12:17 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:17.827167946Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3e8ebd1a20bfc       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       2                   a36e40e4be015       storage-provisioner
	780d293abd667       56cc512116c8f                                                                                         9 minutes ago       Running             busybox                   1                   c52f1bc00aa92       busybox
	4a3ca81fe2f1a       52546a367cc9e                                                                                         9 minutes ago       Running             coredns                   1                   bd94f1800e4a3       coredns-66bc5c9577-zqqdn
	f8587a790c480       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       1                   a36e40e4be015       storage-provisioner
	12638a28f3092       df0860106674d                                                                                         9 minutes ago       Running             kube-proxy                1                   cd6249d9b3faa       kube-proxy-bspjk
	7d541696821e3       46169d968e920                                                                                         9 minutes ago       Running             kube-scheduler            1                   cc91534300045       kube-scheduler-default-k8s-diff-port-414542
	d91e30763cb74       90550c43ad2bc                                                                                         9 minutes ago       Running             kube-apiserver            1                   d6b4d97a3c8cf       kube-apiserver-default-k8s-diff-port-414542
	cfcc3c32a6429       a0af72f2ec6d6                                                                                         9 minutes ago       Running             kube-controller-manager   1                   6cdca3ea59f62       kube-controller-manager-default-k8s-diff-port-414542
	63101e5318f49       5f1f5298c888d                                                                                         9 minutes ago       Running             etcd                      1                   10a4673adc5bc       etcd-default-k8s-diff-port-414542
	47156dda7bdb0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              busybox                   0                   97f983e609474       busybox
	a418c15537b4f       52546a367cc9e                                                                                         10 minutes ago      Exited              coredns                   0                   8b4c4bb9b075f       coredns-66bc5c9577-zqqdn
	cf88e0ff6e4c5       df0860106674d                                                                                         10 minutes ago      Exited              kube-proxy                0                   2653c32e79939       kube-proxy-bspjk
	f12fc2b57d5c7       a0af72f2ec6d6                                                                                         10 minutes ago      Exited              kube-controller-manager   0                   fa259aa7113b7       kube-controller-manager-default-k8s-diff-port-414542
	c052b7974c71e       90550c43ad2bc                                                                                         10 minutes ago      Exited              kube-apiserver            0                   cda4e6ba82c43       kube-apiserver-default-k8s-diff-port-414542
	7be81117198c4       46169d968e920                                                                                         10 minutes ago      Exited              kube-scheduler            0                   1f4e115702e59       kube-scheduler-default-k8s-diff-port-414542
	289ff9fbcded6       5f1f5298c888d                                                                                         10 minutes ago      Exited              etcd                      0                   1ce2a65f82bd4       etcd-default-k8s-diff-port-414542
	
	
	==> coredns [4a3ca81fe2f1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43464 - 46009 "HINFO IN 1513859665036013232.7870983957954654933. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021421812s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [a418c15537b4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45638 - 13643 "HINFO IN 4710081106409396512.4132293983694253617. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048326747s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-414542
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-414542
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=default-k8s-diff-port-414542
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_05_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:05:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-414542
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:15:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:13:26 +0000   Mon, 29 Sep 2025 12:05:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:13:26 +0000   Mon, 29 Sep 2025 12:05:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:13:26 +0000   Mon, 29 Sep 2025 12:05:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:13:26 +0000   Mon, 29 Sep 2025 12:05:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-414542
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 bcfa6945434c4edbae65e29ccc26141f
	  System UUID:                c9dfe7da-7478-4379-bb83-cc78f009c0b7
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-zqqdn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-default-k8s-diff-port-414542                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-414542             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-414542    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-bspjk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-414542             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-btxhj                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k7qd7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cxjff                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m41s                  kube-proxy       
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-414542 event: Registered Node default-k8s-diff-port-414542 in Controller
	  Normal  Starting                 9m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m45s (x8 over 9m45s)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m45s (x8 over 9m45s)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m45s (x7 over 9m45s)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m39s                  node-controller  Node default-k8s-diff-port-414542 event: Registered Node default-k8s-diff-port-414542 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e ea 9d d2 75 10 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000032] ll header: 00000000: ff ff ff ff ff ff 02 ed 9c 9f 01 b3 08 06
	[  +7.676274] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 8f 99 59 79 53 08 06
	[  +0.010443] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 ef 7b 7a 25 80 08 06
	[Sep29 12:05] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 2f 1f 69 18 cd 08 06
	[  +1.465609] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e fa a1 d1 16 fd 08 06
	[  +0.010904] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 28 d0 79 65 86 08 06
	[ +11.321410] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 4d be 93 b2 64 08 06
	[  +0.030376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d1 94 90 6f a6 08 06
	[  +0.372330] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ae 62 92 9c b4 08 06
	[Sep29 12:06] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be c7 f6 43 2b 7f 08 06
	[ +17.127071] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a de e7 85 72 24 08 06
	[ +12.501214] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 4d 9c c6 34 d5 08 06
	
	
	==> etcd [289ff9fbcded] <==
	{"level":"warn","ts":"2025-09-29T12:05:31.252941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.259467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.266281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.281642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.288433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.295021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.345896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33912","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:05:56.093205Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:05:56.093289Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-414542","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:05:56.093396Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:03.094986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:03.095100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T12:06:03.095138Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:03.095233Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:06:03.095195Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"error","ts":"2025-09-29T12:06:03.095248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T12:06:03.095194Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:03.095264Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:06:03.095272Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:03.095274Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T12:06:03.095287Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T12:06:03.098049Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-09-29T12:06:03.098106Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:03.098134Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T12:06:03.098143Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-414542","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [63101e5318f4] <==
	{"level":"warn","ts":"2025-09-29T12:06:16.380624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.398386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.406058Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53782","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.414409Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.421247Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.428869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.437462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.445601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.453286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.460568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.468241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.476793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.488562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.498821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.501343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.508194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.514619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.521503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.528455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.535327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.541912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.554691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.561565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.568526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.625924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 12:16:00 up  1:58,  0 users,  load average: 1.54, 1.46, 2.28
	Linux default-k8s-diff-port-414542 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c052b7974c71] <==
	W0929 12:06:05.284230       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.301748       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.349820       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.349827       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.387964       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.448541       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.459070       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.476680       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.486116       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.507000       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.523461       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.628753       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.645426       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.693281       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.704865       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.717987       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.770277       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.771501       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.915775       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.931414       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.948789       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.997933       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:06.020699       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:06.032124       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:06.055743       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d91e30763cb7] <==
	I0929 12:11:18.132358       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:12:05.678118       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:12:16.315776       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:12:18.131807       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:12:18.131887       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:12:18.131915       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:12:18.132914       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:12:18.133016       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:12:18.133032       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:13:18.717515       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:13:33.280540       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:14:18.132373       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:14:18.132439       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:14:18.132458       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:14:18.133490       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:14:18.133587       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:14:18.133607       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:14:41.572566       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:14:56.194247       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [cfcc3c32a642] <==
	I0929 12:09:50.538931       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:10:20.505512       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:10:20.546794       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:10:50.509353       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:10:50.553509       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:11:20.514313       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:11:20.561166       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:11:50.518995       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:11:50.567497       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:12:20.524057       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:12:20.574558       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:12:50.528456       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:12:50.582038       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:13:20.533662       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:13:20.589325       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:13:50.538393       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:13:50.596747       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:14:20.543200       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:14:20.603725       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:14:50.547262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:14:50.611524       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:15:20.551271       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:15:20.618349       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:15:50.555636       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:15:50.626328       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [f12fc2b57d5c] <==
	I0929 12:05:38.762442       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:05:38.762732       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:05:38.762793       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 12:05:38.762821       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:05:38.763407       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 12:05:38.763671       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:05:38.764642       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 12:05:38.764691       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 12:05:38.764731       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:05:38.764821       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-414542"
	I0929 12:05:38.764894       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:05:38.764866       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:05:38.765771       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 12:05:38.765807       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 12:05:38.767128       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 12:05:38.768695       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:05:38.768758       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:05:38.768807       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:05:38.768817       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:05:38.768830       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:05:38.772251       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:05:38.772652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:38.778967       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-414542" podCIDRs=["10.244.0.0/24"]
	I0929 12:05:38.781010       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 12:05:38.792415       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [12638a28f309] <==
	I0929 12:06:18.497579       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:06:18.555564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:06:18.655767       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:06:18.655808       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 12:06:18.655988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:06:18.678567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:06:18.678633       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:06:18.684359       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:06:18.684687       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:06:18.684703       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:18.685852       1 config.go:309] "Starting node config controller"
	I0929 12:06:18.685912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:06:18.685922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:06:18.685959       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:06:18.685984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:06:18.686049       1 config.go:200] "Starting service config controller"
	I0929 12:06:18.686180       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:06:18.686123       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:06:18.686237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:06:18.786828       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:06:18.786850       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:06:18.786926       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [cf88e0ff6e4c] <==
	I0929 12:05:40.276143       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:05:40.344682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:05:40.445659       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:05:40.445710       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 12:05:40.447009       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:05:40.475794       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:05:40.475915       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:05:40.482629       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:05:40.484328       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:05:40.484449       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:40.489656       1 config.go:200] "Starting service config controller"
	I0929 12:05:40.489678       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:05:40.489705       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:05:40.489710       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:05:40.489798       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:05:40.489810       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:05:40.493045       1 config.go:309] "Starting node config controller"
	I0929 12:05:40.493111       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:05:40.493139       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:05:40.590636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:05:40.590694       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:05:40.591139       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7be81117198c] <==
	E0929 12:05:31.775908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:05:31.776086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:05:31.776117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:31.776139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:05:31.776269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:31.776378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:05:31.776389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 12:05:31.776630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:05:31.777273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:05:32.697709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:05:32.760127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:05:32.761947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 12:05:32.788573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:05:32.814766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:05:32.861075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:05:32.871260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:05:33.022943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:33.037096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:05:33.075271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:05:35.872675       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:05:56.078809       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:05:56.078855       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:05:56.079248       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:05:56.079369       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:05:56.079394       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7d541696821e] <==
	I0929 12:06:16.084120       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:06:17.082131       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:06:17.082258       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:06:17.082293       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:06:17.082345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:06:17.127857       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:06:17.127900       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:17.132953       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:17.133130       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:17.133273       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:06:17.133353       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:06:17.235448       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 12:14:17 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:17.784531    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:14:18 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:18.782325    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:14:25 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:25.782171    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:14:30 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:30.782213    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:14:32 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:32.787828    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:14:40 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:40.782124    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:14:44 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:44.782592    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:14:44 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:44.782640    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:14:51 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:51.782015    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:14:57 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:57.782317    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:14:59 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:14:59.782031    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:15:03 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:03.782458    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:15:11 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:11.782392    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:15:12 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:12.789452    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:15:15 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:15.782366    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:15:22 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:22.781988    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:15:26 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:26.785114    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:15:29 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:29.782570    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:15:33 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:33.782683    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:15:39 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:39.782023    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:15:44 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:44.783058    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:15:46 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:46.781975    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:15:54 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:54.782754    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:15:57 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:57.782413    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:15:59 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:15:59.782719    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	
	
	==> storage-provisioner [3e8ebd1a20bf] <==
	W0929 12:15:35.314066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:37.317568       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:37.321841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:39.325178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:39.330229       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:41.333429       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:41.337655       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:43.341097       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:43.346521       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:45.350536       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:45.354844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:47.358218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:47.362691       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:49.366205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:49.370180       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:51.373531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:51.377572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:53.381285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:53.386189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:55.389522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:55.393465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:57.396383       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:57.401399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:59.404481       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:59.408132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f8587a790c48] <==
	I0929 12:06:18.473754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:06:48.478215       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-btxhj dashboard-metrics-scraper-6ffb444bf9-k7qd7 kubernetes-dashboard-855c9754f9-cxjff
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 describe pod metrics-server-746fcd58dc-btxhj dashboard-metrics-scraper-6ffb444bf9-k7qd7 kubernetes-dashboard-855c9754f9-cxjff
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-414542 describe pod metrics-server-746fcd58dc-btxhj dashboard-metrics-scraper-6ffb444bf9-k7qd7 kubernetes-dashboard-855c9754f9-cxjff: exit status 1 (65.055986ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-btxhj" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-k7qd7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-cxjff" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-414542 describe pod metrics-server-746fcd58dc-btxhj dashboard-metrics-scraper-6ffb444bf9-k7qd7 kubernetes-dashboard-855c9754f9-cxjff: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.37s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9zp7" [3644e7d0-9ed1-4318-b46e-d6c46932ae65] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 12:07:13.293789  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031687 -n embed-certs-031687
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 12:16:12.511553122 +0000 UTC m=+3854.555365215
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-031687 describe po kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-031687 describe po kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-l9zp7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-031687/192.168.76.2
Start Time:       Mon, 29 Sep 2025 12:06:38 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k4mkh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-k4mkh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m34s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7 to embed-certs-031687
  Normal   Pulling    6m33s (x5 over 9m34s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     6m33s (x5 over 9m34s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m33s (x5 over 9m34s)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m25s (x21 over 9m33s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     4m25s (x21 over 9m33s)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-031687 logs kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-031687 logs kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard: exit status 1 (74.687958ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-l9zp7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-031687 logs kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-031687
helpers_test.go:243: (dbg) docker inspect embed-certs-031687:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04",
	        "Created": "2025-09-29T12:05:01.102607645Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 866700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:06:24.05064812Z",
	            "FinishedAt": "2025-09-29T12:06:23.219404934Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04/hosts",
	        "LogPath": "/var/lib/docker/containers/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04-json.log",
	        "Name": "/embed-certs-031687",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-031687:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-031687",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04",
	                "LowerDir": "/var/lib/docker/overlay2/998b5ac965ecfd37fdc19422783a57b67430225be76307a031e81a6367d9ae90-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/998b5ac965ecfd37fdc19422783a57b67430225be76307a031e81a6367d9ae90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/998b5ac965ecfd37fdc19422783a57b67430225be76307a031e81a6367d9ae90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/998b5ac965ecfd37fdc19422783a57b67430225be76307a031e81a6367d9ae90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-031687",
	                "Source": "/var/lib/docker/volumes/embed-certs-031687/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-031687",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-031687",
	                "name.minikube.sigs.k8s.io": "embed-certs-031687",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cfd1fe5476ded4503d7cb9d88249e773444e93173c3f2a335f7be1b4bde0bc8",
	            "SandboxKey": "/var/run/docker/netns/8cfd1fe5476d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-031687": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:76:b8:93:d7:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bcd2926a5ec53b938330bde349b95cf914c53ca94ae1c2f503c01a3cdcda13e2",
	                    "EndpointID": "16d33f17954c4be00250a5728ca37721615d4dc68bfaf37d18d59cb5ac36f637",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-031687",
	                        "e4f6355ca9ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-031687 -n embed-certs-031687
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-031687 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-031687 logs -n 25: (1.072228995s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p calico-934155 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo cat /etc/containerd/config.toml                                                                                                                                                                                           │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo containerd config dump                                                                                                                                                                                                    │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p disable-driver-mounts-929504                                                                                                                                                                                                                 │ disable-driver-mounts-929504 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │                     │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl cat crio --no-pager                                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                   │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo crio config                                                                                                                                                                                                               │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p calico-934155                                                                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-858855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ stop    │ -p default-k8s-diff-port-414542 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-031687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p embed-certs-031687 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-031687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-306088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p no-preload-306088 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p no-preload-306088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:06:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
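
The header above documents the klog line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). For illustration only, a minimal Go sketch that splits a line of this shape into its fields; the regular expression is an assumption derived from that format string, not code from minikube:

    package main

    import (
        "fmt"
        "regexp"
    )

    // klogLine matches [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg,
    // the format documented in the "Last Start" header above.
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^ \]]+)\] (.*)$`)

    func main() {
        // Example line copied from the log below.
        line := "I0929 12:06:36.516482  871091 out.go:360] Setting OutFile to fd 1 ..."
        m := klogLine.FindStringSubmatch(line)
        if m == nil {
            fmt.Println("not a klog-formatted line")
            return
        }
        fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
            m[1], m[2], m[3], m[4], m[5], m[6])
    }
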
	I0929 12:06:36.516482  871091 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:06:36.516771  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.516782  871091 out.go:374] Setting ErrFile to fd 2...
	I0929 12:06:36.516786  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.517034  871091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:06:36.517566  871091 out.go:368] Setting JSON to false
	I0929 12:06:36.519099  871091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6540,"bootTime":1759141056,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:06:36.519186  871091 start.go:140] virtualization: kvm guest
	I0929 12:06:36.521306  871091 out.go:179] * [no-preload-306088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:06:36.522994  871091 notify.go:220] Checking for updates...
	I0929 12:06:36.523025  871091 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:06:36.524361  871091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:06:36.526212  871091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:36.527856  871091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:06:36.529330  871091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:06:36.530640  871091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:06:36.532489  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:36.532971  871091 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:06:36.557847  871091 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:06:36.557955  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.619389  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.606711858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.619500  871091 docker.go:318] overlay module found
	I0929 12:06:36.621623  871091 out.go:179] * Using the docker driver based on existing profile
	I0929 12:06:36.622958  871091 start.go:304] selected driver: docker
	I0929 12:06:36.622977  871091 start.go:924] validating driver "docker" against &{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.623069  871091 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:06:36.623939  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.681042  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.670856635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.681348  871091 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:36.681383  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:36.681440  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:36.681496  871091 start.go:348] cluster config:
	{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.683409  871091 out.go:179] * Starting "no-preload-306088" primary control-plane node in "no-preload-306088" cluster
	I0929 12:06:36.684655  871091 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:06:36.685791  871091 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:06:36.686923  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:36.687033  871091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:06:36.687071  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:36.687230  871091 cache.go:107] acquiring lock: {Name:mk458b8403b4159d98f7ca606060a1e77262160a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687232  871091 cache.go:107] acquiring lock: {Name:mkf63d99dbdfbf068ef033ecf191a655730e20a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687337  871091 cache.go:107] acquiring lock: {Name:mkd9e4857d62d04bc7d49138f7e4fb0f42e97bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687338  871091 cache.go:107] acquiring lock: {Name:mk4450faafd650ccd11a718cb9b7190d17ab5337 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687401  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 12:06:36.687412  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 12:06:36.687392  871091 cache.go:107] acquiring lock: {Name:mkbcd57035e12e42444c6b36c8f1b923cbfef46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687414  871091 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 202.746µs
	I0929 12:06:36.687421  871091 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 90.507µs
	I0929 12:06:36.687399  871091 cache.go:107] acquiring lock: {Name:mkde0ed0d421c77cb34c222a8ab10a2c13e3e1ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687387  871091 cache.go:107] acquiring lock: {Name:mk11769872d039acf11fe2041fd2e18abd2ae3a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687446  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0929 12:06:36.687455  871091 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 64.616µs
	I0929 12:06:36.687464  871091 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 12:06:36.687467  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 12:06:36.687476  871091 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 144.146µs
	I0929 12:06:36.687484  871091 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 12:06:36.687374  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 12:06:36.687507  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 12:06:36.687466  871091 cache.go:107] acquiring lock: {Name:mk481f9282d27c94586ac987d8a6cd5ea0f1d68c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687587  871091 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.629µs
	I0929 12:06:36.687586  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 12:06:36.687603  871091 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 12:06:36.687581  871091 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 346.559µs
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 12:06:36.687607  871091 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 276.399µs
	I0929 12:06:36.687618  871091 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 12:06:36.687620  871091 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 12:06:36.687628  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 12:06:36.687644  871091 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 230.083µs
	I0929 12:06:36.687655  871091 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 12:06:36.687663  871091 cache.go:87] Successfully saved all images to host disk.
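
The cache.go entries above stat each image's tarball under .minikube/cache/images and skip the export when it already exists. A minimal sketch of that check-then-save pattern, assuming a simplified path layout and a placeholder save function (neither is minikube's actual implementation):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    // cachePathFor maps an image reference onto an on-disk cache file in the
    // style seen above, e.g. registry.k8s.io/pause:3.10.1 -> .../pause_3.10.1.
    // The layout is inferred from the log, not taken from minikube's code.
    func cachePathFor(cacheDir, image string) string {
        return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
    }

    // ensureCached skips the export when the file already exists, mirroring the
    // "exists ... skipping"/"succeeded" messages above. save is a placeholder.
    func ensureCached(cacheDir, image string, save func(path string) error) error {
        p := cachePathFor(cacheDir, image)
        if _, err := os.Stat(p); err == nil {
            fmt.Printf("cache image %q -> %q exists, skipping\n", image, p)
            return nil
        }
        return save(p)
    }

    func main() {
        for _, img := range []string{"registry.k8s.io/pause:3.10.1", "registry.k8s.io/etcd:3.6.4-0"} {
            _ = ensureCached("/tmp/cache/images/amd64", img, func(path string) error {
                fmt.Printf("would export %s to %s\n", img, path)
                return nil
            })
        }
    }
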
	I0929 12:06:36.709009  871091 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:06:36.709031  871091 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:06:36.709049  871091 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:06:36.709083  871091 start.go:360] acquireMachinesLock for no-preload-306088: {Name:mk0ed8d49a268e0ff510517b50934257047b58c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.709145  871091 start.go:364] duration metric: took 44.22µs to acquireMachinesLock for "no-preload-306088"
	I0929 12:06:36.709171  871091 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:06:36.709180  871091 fix.go:54] fixHost starting: 
	I0929 12:06:36.709410  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:36.728528  871091 fix.go:112] recreateIfNeeded on no-preload-306088: state=Stopped err=<nil>
	W0929 12:06:36.728557  871091 fix.go:138] unexpected machine state, will restart: <nil>
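
fix.go decides whether a restart is needed from the container state reported by `docker container inspect --format={{.State.Status}}`. A rough stand-alone equivalent of that probe using os/exec; the container name is taken from the log and error handling is simplified:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerState returns the docker-reported state (e.g. "running", "exited")
    // for the named container, the same probe the log performs before deciding
    // whether the machine has to be restarted.
    func containerState(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        state, err := containerState("no-preload-306088")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        if state != "running" {
            fmt.Println("container stopped; a restart (docker start) would be needed")
        }
    }
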
	W0929 12:06:33.757650  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:35.757705  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:34.860020  866509 addons.go:514] duration metric: took 2.511095137s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:34.860298  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:34.860316  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.355994  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.362405  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:35.362444  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.855983  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.860174  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 12:06:35.861328  866509 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:35.861365  866509 api_server.go:131] duration metric: took 1.00564321s to wait for apiserver health ...
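
The api_server.go entries above poll https://192.168.76.2:8443/healthz until the 500 responses (with rbac/bootstrap-roles still pending) turn into a 200. A minimal polling loop in the same spirit; skipping TLS verification here is a simplification for the sketch, not what the real check does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls url until it returns HTTP 200 or the deadline passes,
    // printing the body on failure (the body lists which post-start hooks are
    // still pending, as in the log above).
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption for this sketch only: skip certificate verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
        if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
            fmt.Println(err)
        }
    }
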
	I0929 12:06:35.861375  866509 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:35.865988  866509 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:35.866018  866509 system_pods.go:61] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.866030  866509 system_pods.go:61] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.866041  866509 system_pods.go:61] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.866050  866509 system_pods.go:61] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.866055  866509 system_pods.go:61] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.866062  866509 system_pods.go:61] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.866068  866509 system_pods.go:61] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.866076  866509 system_pods.go:61] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.866083  866509 system_pods.go:74] duration metric: took 4.69699ms to wait for pod list to return data ...
	I0929 12:06:35.866093  866509 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:35.868695  866509 default_sa.go:45] found service account: "default"
	I0929 12:06:35.868715  866509 default_sa.go:55] duration metric: took 2.61564ms for default service account to be created ...
	I0929 12:06:35.868726  866509 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:35.872060  866509 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:35.872097  866509 system_pods.go:89] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.872135  866509 system_pods.go:89] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.872153  866509 system_pods.go:89] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.872164  866509 system_pods.go:89] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.872173  866509 system_pods.go:89] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.872187  866509 system_pods.go:89] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.872200  866509 system_pods.go:89] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.872215  866509 system_pods.go:89] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.872229  866509 system_pods.go:126] duration metric: took 3.496882ms to wait for k8s-apps to be running ...
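
system_pods.go lists the kube-system pods and reports each one's container readiness, which is why every entry above carries a Running/Pending phase plus a Ready/ContainersNotReady annotation. A comparable check with client-go; the kubeconfig path is a placeholder and the readiness test is reduced to the PodReady condition:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // podReady reports whether the PodReady condition is True.
    func podReady(p corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        // Placeholder kubeconfig location for this sketch.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("%-45s phase=%-9s ready=%v\n", p.Name, p.Status.Phase, podReady(p))
        }
    }
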
	I0929 12:06:35.872241  866509 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:35.872298  866509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:35.886596  866509 system_svc.go:56] duration metric: took 14.342667ms WaitForService to wait for kubelet
	I0929 12:06:35.886631  866509 kubeadm.go:578] duration metric: took 3.537789699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:35.886658  866509 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:35.889756  866509 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:35.889792  866509 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:35.889815  866509 node_conditions.go:105] duration metric: took 3.143621ms to run NodePressure ...
	I0929 12:06:35.889827  866509 start.go:241] waiting for startup goroutines ...
	I0929 12:06:35.889846  866509 start.go:246] waiting for cluster config update ...
	I0929 12:06:35.889860  866509 start.go:255] writing updated cluster config ...
	I0929 12:06:35.890142  866509 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:35.893992  866509 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:35.898350  866509 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:37.904542  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:36.730585  871091 out.go:252] * Restarting existing docker container for "no-preload-306088" ...
	I0929 12:06:36.730671  871091 cli_runner.go:164] Run: docker start no-preload-306088
	I0929 12:06:36.986434  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:37.007128  871091 kic.go:430] container "no-preload-306088" state is running.
	I0929 12:06:37.007513  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:37.028527  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:37.028818  871091 machine.go:93] provisionDockerMachine start ...
	I0929 12:06:37.028949  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:37.047803  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:37.048197  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:37.048230  871091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:06:37.048917  871091 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35296->127.0.0.1:33523: read: connection reset by peer
	I0929 12:06:40.187221  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.187251  871091 ubuntu.go:182] provisioning hostname "no-preload-306088"
	I0929 12:06:40.187303  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.206043  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.206254  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.206273  871091 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-306088 && echo "no-preload-306088" | sudo tee /etc/hostname
	I0929 12:06:40.358816  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.358923  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.377596  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.377950  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.377981  871091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-306088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-306088/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-306088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:06:40.514897  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
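
Each provisioning step above is a single command executed over SSH to 127.0.0.1:33523, the host port mapped to the container's sshd. A stripped-down version of that dial, open-a-session, run-one-command step with golang.org/x/crypto/ssh; the insecure host-key callback is an assumption made for the sketch, and the key path is the one shown later in this log:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runOverSSH executes a single command on addr as user, authenticating with
    // the given private key, and returns the combined output.
    func runOverSSH(addr, user, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        client, err := ssh.Dial("tcp", addr, &ssh.ClientConfig{
            User: user,
            Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
            // Assumption: skip host-key verification in this illustration.
            HostKeyCallback: ssh.InsecureIgnoreHostKey(),
        })
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runOverSSH("127.0.0.1:33523", "docker",
            "/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa",
            "hostname")
        fmt.Println(out, err)
    }
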
	I0929 12:06:40.514933  871091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:06:40.514962  871091 ubuntu.go:190] setting up certificates
	I0929 12:06:40.514972  871091 provision.go:84] configureAuth start
	I0929 12:06:40.515033  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:40.534028  871091 provision.go:143] copyHostCerts
	I0929 12:06:40.534112  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:06:40.534132  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:06:40.534221  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:06:40.534378  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:06:40.534391  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:06:40.534433  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:06:40.534548  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:06:40.534559  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:06:40.534599  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:06:40.534700  871091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.no-preload-306088 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-306088]
	I0929 12:06:40.796042  871091 provision.go:177] copyRemoteCerts
	I0929 12:06:40.796100  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:06:40.796141  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.814638  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:40.913779  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:06:40.940147  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:06:40.966181  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:06:40.992149  871091 provision.go:87] duration metric: took 477.163201ms to configureAuth
	I0929 12:06:40.992177  871091 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:06:40.992354  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:40.992402  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.010729  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.011015  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.011031  871091 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:06:41.149250  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:06:41.149283  871091 ubuntu.go:71] root file system type: overlay
	I0929 12:06:41.149434  871091 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:06:41.149508  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.169382  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.169625  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.169731  871091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:06:41.327834  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:06:41.327968  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.349146  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.349454  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.349487  871091 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:06:41.500464  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
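The log entries above show the unit being installed conditionally: the freshly rendered docker.service.new is only swapped in (followed by daemon-reload, enable, and restart) when `diff -u` reports a difference from the unit already on disk. A minimal Go sketch of that same pattern, assuming local file paths rather than minikube's SSH runner, and not minikube's actual implementation:

```go
// unitswap.go: illustrative sketch of the "write <unit>.new, replace it only
// if it differs, then daemon-reload/restart" step shown in the log above.
// Paths and the unit name are assumptions for the example.
package main

import (
	"bytes"
	"fmt"
	"log"
	"os"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	const unit = "/lib/systemd/system/docker.service"
	newPath := unit + ".new"

	oldData, _ := os.ReadFile(unit) // a missing unit simply counts as "changed"
	newData, err := os.ReadFile(newPath)
	if err != nil {
		log.Fatalf("read %s: %v", newPath, err)
	}

	if bytes.Equal(oldData, newData) {
		fmt.Println("unit unchanged, nothing to do")
		return
	}

	// Swap in the new unit, then reload systemd and restart the service,
	// mirroring the `mv && systemctl daemon-reload && systemctl restart` step.
	if err := os.Rename(newPath, unit); err != nil {
		log.Fatalf("install unit: %v", err)
	}
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		if err := run("systemctl", args...); err != nil {
			log.Fatalf("systemctl %v: %v", args, err)
		}
	}
}
```

Comparing contents before restarting avoids bouncing dockerd (and everything running on it) on reruns where nothing changed, which is why the SSH command above returns quickly with empty output here.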
	I0929 12:06:41.500497  871091 machine.go:96] duration metric: took 4.471659866s to provisionDockerMachine
	I0929 12:06:41.500512  871091 start.go:293] postStartSetup for "no-preload-306088" (driver="docker")
	I0929 12:06:41.500527  871091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:06:41.500590  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:06:41.500647  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	W0929 12:06:38.257066  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.257540  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.404187  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:42.404863  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:41.520904  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.620006  871091 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:06:41.623863  871091 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:06:41.623914  871091 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:06:41.623925  871091 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:06:41.623935  871091 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:06:41.623959  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:06:41.624015  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:06:41.624111  871091 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:06:41.624227  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:06:41.634489  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:41.661187  871091 start.go:296] duration metric: took 160.643724ms for postStartSetup
	I0929 12:06:41.661275  871091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:06:41.661317  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.679286  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.773350  871091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:06:41.778053  871091 fix.go:56] duration metric: took 5.068864392s for fixHost
	I0929 12:06:41.778084  871091 start.go:83] releasing machines lock for "no-preload-306088", held for 5.068924928s
	I0929 12:06:41.778174  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:41.796247  871091 ssh_runner.go:195] Run: cat /version.json
	I0929 12:06:41.796329  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.796378  871091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:06:41.796452  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.815939  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.816193  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.990299  871091 ssh_runner.go:195] Run: systemctl --version
	I0929 12:06:41.995288  871091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:06:42.000081  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:06:42.020438  871091 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:06:42.020518  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:06:42.029627  871091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
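Before disabling any bridge/podman CNI configs, the runner patches whatever loopback CNI config it finds so that it carries a "name" field and a cniVersion of 1.0.0. The find/sed one-liner above is hard to read; the following Go sketch (hypothetical file path, not minikube's code) performs the equivalent edit on a single file:

```go
// cnipatch.go: illustrative sketch of the loopback CNI patch shown above:
// ensure the config has a "name" and pin "cniVersion" to 1.0.0.
// The file path below is an assumption for the example.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	const path = "/etc/cni/net.d/200-loopback.conf" // hypothetical path

	raw, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}

	var conf map[string]any
	if err := json.Unmarshal(raw, &conf); err != nil {
		log.Fatal(err)
	}
	if conf["type"] != "loopback" {
		log.Fatalf("%s is not a loopback config", path)
	}

	// Same effect as the sed commands in the log: add a name if absent,
	// and force the cniVersion expected by the CRI stack.
	if _, ok := conf["name"]; !ok {
		conf["name"] = "loopback"
	}
	conf["cniVersion"] = "1.0.0"

	out, err := json.MarshalIndent(conf, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile(path, out, 0o644); err != nil {
		log.Fatal(err)
	}
}
```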
	I0929 12:06:42.029658  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.029697  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.029845  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.046748  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:06:42.057142  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:06:42.067569  871091 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:06:42.067621  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:06:42.078146  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.089207  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:06:42.099515  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.109953  871091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:06:42.119715  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:06:42.130148  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:06:42.140184  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 12:06:42.151082  871091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:06:42.161435  871091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:06:42.171100  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.243863  871091 ssh_runner.go:195] Run: sudo systemctl restart containerd
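The run above rewrites /etc/containerd/config.toml in place with a series of sed commands: pinning the pause sandbox image, forcing `SystemdCgroup = true` to match the detected systemd cgroup driver, normalizing the runc runtime name, and then restarting containerd. As a rough illustration of two of those edits (an approximation, not minikube's implementation):

```go
// containerdcfg.go: illustrative sketch of the config.toml edits above:
// pin the pause image and switch runc to the systemd cgroup driver,
// mirroring the sed one-liners. The path is an assumed location.
package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	const path = "/etc/containerd/config.toml"

	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	text := string(data)

	// Equivalent to: sed -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|'
	text = regexp.MustCompile(`(?m)^( *)sandbox_image = .*$`).
		ReplaceAllString(text, `${1}sandbox_image = "registry.k8s.io/pause:3.10.1"`)

	// Equivalent to: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g'
	text = regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`).
		ReplaceAllString(text, `${1}SystemdCgroup = true`)

	if err := os.WriteFile(path, []byte(text), 0o644); err != nil {
		log.Fatal(err)
	}
	// A `systemctl daemon-reload && systemctl restart containerd`, as in the
	// log lines above, is still required for the change to take effect.
}
```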
	I0929 12:06:42.322789  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.322843  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.322910  871091 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:06:42.336670  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.348890  871091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:06:42.364257  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.376038  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:06:42.387832  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.405901  871091 ssh_runner.go:195] Run: which cri-dockerd
	I0929 12:06:42.409515  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:06:42.419370  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:06:42.438082  871091 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:06:42.511679  871091 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:06:42.584368  871091 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:06:42.584521  871091 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 12:06:42.604074  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:06:42.615691  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.684549  871091 ssh_runner.go:195] Run: sudo systemctl restart docker
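Here the runner copies a 129-byte /etc/docker/daemon.json onto the node to make dockerd use the systemd cgroup driver, then reloads and restarts docker. The log does not show the file's contents; a common shape for such a file (an assumption for illustration, not necessarily the exact bytes minikube writes) is sketched below:

```go
// daemoncfg.go: sketch of writing a daemon.json that selects the systemd
// cgroup driver for dockerd. The "exec-opts" key is a standard dockerd
// option, but the exact file minikube generates is not shown in the log.
package main

import (
	"encoding/json"
	"log"
	"os"
)

func main() {
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=systemd"},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/docker/daemon.json", data, 0o644); err != nil {
		log.Fatal(err)
	}
	// Follow with `systemctl daemon-reload && systemctl restart docker`,
	// as the subsequent log lines do.
}
```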
	I0929 12:06:43.531184  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:06:43.543167  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:06:43.555540  871091 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 12:06:43.568219  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.580095  871091 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:06:43.648390  871091 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:06:43.718653  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.787645  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:06:43.810310  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:06:43.822583  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.892062  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:06:43.972699  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.985893  871091 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:06:43.985990  871091 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:06:43.990107  871091 start.go:563] Will wait 60s for crictl version
	I0929 12:06:43.990186  871091 ssh_runner.go:195] Run: which crictl
	I0929 12:06:43.993712  871091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:06:44.032208  871091 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:06:44.032285  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.059274  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.086497  871091 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:06:44.086597  871091 cli_runner.go:164] Run: docker network inspect no-preload-306088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:06:44.103997  871091 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 12:06:44.108202  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.121433  871091 kubeadm.go:875] updating cluster {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:06:44.121548  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:44.121582  871091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:06:44.142018  871091 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 12:06:44.142049  871091 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:06:44.142057  871091 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0929 12:06:44.142162  871091 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-306088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:06:44.142214  871091 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:06:44.196459  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:44.196503  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:44.196520  871091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:06:44.196548  871091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-306088 NodeName:no-preload-306088 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:06:44.196683  871091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-306088"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:06:44.196744  871091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:06:44.206772  871091 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:06:44.206838  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:06:44.216022  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0929 12:06:44.234761  871091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:06:44.253842  871091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
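The kubeadm config generated above (copied to /var/tmp/minikube/kubeadm.yaml.new here) is a four-document YAML stream: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A quick way to sanity-check such a file without any YAML dependency is to split on the `---` separators and read each document's `kind`, as in this sketch (assuming a local copy of the file):

```go
// kindcheck.go: sketch of a dependency-free sanity check over a multi-document
// kubeadm config like the one printed above: report the kind of each document.
// The local filename is an assumption for the example.
package main

import (
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	data, err := os.ReadFile("kubeadm.yaml")
	if err != nil {
		log.Fatal(err)
	}

	for i, doc := range strings.Split(string(data), "\n---\n") {
		kind := "(unknown)"
		for _, line := range strings.Split(doc, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "kind:") {
				kind = strings.TrimSpace(strings.TrimPrefix(trimmed, "kind:"))
				break
			}
		}
		fmt.Printf("document %d: kind=%s\n", i+1, kind)
	}
}
```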
	I0929 12:06:44.274561  871091 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:06:44.278469  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.290734  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.362332  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:44.386713  871091 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088 for IP: 192.168.94.2
	I0929 12:06:44.386744  871091 certs.go:194] generating shared ca certs ...
	I0929 12:06:44.386768  871091 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.386954  871091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:06:44.387011  871091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:06:44.387021  871091 certs.go:256] generating profile certs ...
	I0929 12:06:44.387100  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/client.key
	I0929 12:06:44.387155  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key.eb5a4896
	I0929 12:06:44.387190  871091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key
	I0929 12:06:44.387288  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:06:44.387320  871091 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:06:44.387329  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:06:44.387351  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:06:44.387373  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:06:44.387393  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:06:44.387440  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:44.388149  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:06:44.419158  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:06:44.448205  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:06:44.482979  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:06:44.517557  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:06:44.549867  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 12:06:44.576134  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:06:44.604658  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 12:06:44.631756  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:06:44.658081  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:06:44.684187  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:06:44.710650  871091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:06:44.729717  871091 ssh_runner.go:195] Run: openssl version
	I0929 12:06:44.735824  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:06:44.745812  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749234  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749293  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.756789  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:06:44.767948  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:06:44.778834  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782611  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782681  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.790603  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:06:44.800010  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:06:44.810306  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814380  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814509  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.822959  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
	I0929 12:06:44.834110  871091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:06:44.837912  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:06:44.844692  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:06:44.851275  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:06:44.858576  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:06:44.866396  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:06:44.875491  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
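The `openssl x509 -checkend 86400` runs above confirm that each control-plane certificate remains valid for at least another day before the existing cluster configuration is reused. The same check can be expressed with Go's crypto/x509, as in this sketch (certificate path taken from the log; error handling kept minimal):

```go
// checkend.go: illustrative Go equivalent of `openssl x509 -checkend 86400`:
// fail if the certificate expires within the next 24 hours.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM data found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}

	// -checkend 86400: is the cert still valid one day from now?
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		log.Fatalf("certificate expires at %s (within 24h)", cert.NotAfter)
	}
	fmt.Printf("certificate valid until %s\n", cert.NotAfter)
}
```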
	I0929 12:06:44.883074  871091 kubeadm.go:392] StartCluster: {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:44.883211  871091 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:06:44.904790  871091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:06:44.917300  871091 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:06:44.917322  871091 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:06:44.917374  871091 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:06:44.927571  871091 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:06:44.928675  871091 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-306088" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.929373  871091 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-357219/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-306088" cluster setting kubeconfig missing "no-preload-306088" context setting]
	I0929 12:06:44.930612  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.932840  871091 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:06:44.943928  871091 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0929 12:06:44.943969  871091 kubeadm.go:593] duration metric: took 26.639509ms to restartPrimaryControlPlane
	I0929 12:06:44.943982  871091 kubeadm.go:394] duration metric: took 60.918658ms to StartCluster
	I0929 12:06:44.944003  871091 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.944082  871091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.946478  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.946713  871091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:06:44.946792  871091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:06:44.946942  871091 addons.go:69] Setting storage-provisioner=true in profile "no-preload-306088"
	I0929 12:06:44.946950  871091 addons.go:69] Setting default-storageclass=true in profile "no-preload-306088"
	I0929 12:06:44.946967  871091 addons.go:238] Setting addon storage-provisioner=true in "no-preload-306088"
	I0929 12:06:44.946975  871091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-306088"
	I0929 12:06:44.946990  871091 addons.go:69] Setting metrics-server=true in profile "no-preload-306088"
	I0929 12:06:44.947004  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:44.947018  871091 addons.go:238] Setting addon metrics-server=true in "no-preload-306088"
	I0929 12:06:44.947007  871091 addons.go:69] Setting dashboard=true in profile "no-preload-306088"
	W0929 12:06:44.947027  871091 addons.go:247] addon metrics-server should already be in state true
	I0929 12:06:44.947041  871091 addons.go:238] Setting addon dashboard=true in "no-preload-306088"
	W0929 12:06:44.946976  871091 addons.go:247] addon storage-provisioner should already be in state true
	W0929 12:06:44.947052  871091 addons.go:247] addon dashboard should already be in state true
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947081  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947415  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947557  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947574  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947710  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.949123  871091 out.go:179] * Verifying Kubernetes components...
	I0929 12:06:44.951560  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.983162  871091 addons.go:238] Setting addon default-storageclass=true in "no-preload-306088"
	W0929 12:06:44.983184  871091 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:06:44.983259  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.983409  871091 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:06:44.983471  871091 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 12:06:44.984010  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.984739  871091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:44.984759  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:06:44.984810  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.985006  871091 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 12:06:44.985094  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:06:44.985115  871091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:06:44.985173  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.989553  871091 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 12:06:44.990700  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 12:06:44.990720  871091 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 12:06:44.990787  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.013082  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.023016  871091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.023045  871091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:06:45.023112  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.023478  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.027093  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.046756  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.088649  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:45.131986  871091 node_ready.go:35] waiting up to 6m0s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:45.142439  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.156825  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:06:45.156854  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 12:06:45.157091  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 12:06:45.157113  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 12:06:45.171641  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.191370  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:06:45.191407  871091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:06:45.191600  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 12:06:45.191622  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 12:06:45.225277  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 12:06:45.225316  871091 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 12:06:45.227138  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.227166  871091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0929 12:06:45.240720  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.240807  871091 retry.go:31] will retry after 255.439226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.253570  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.253730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 12:06:45.253752  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 12:06:45.256592  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.256642  871091 retry.go:31] will retry after 176.530584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.284730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 12:06:45.284766  871091 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 12:06:45.315598  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 12:06:45.315629  871091 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 12:06:45.337290  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.337352  871091 retry.go:31] will retry after 216.448516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.341267  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 12:06:45.341293  871091 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 12:06:45.367418  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 12:06:45.367447  871091 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 12:06:45.394525  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.394579  871091 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 12:06:45.428230  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.433674  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.496374  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.554373  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
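The `apply failed, will retry` warnings above are expected at this stage: the addon manifests are applied while the apiserver on localhost:8443 is still coming up, so kubectl's validation hits `connection refused`, the call is retried after a short delay, and eventually re-run with `--force`. A generic sketch of that retry-with-backoff pattern, using an assumed command and delays rather than minikube's actual retry.go values:

```go
// applyretry.go: sketch of retrying an external command with exponential
// backoff, as seen in the retry.go log lines above. Not minikube's code;
// the kubectl invocation and delays are assumptions for the example.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml"}

	var lastErr error
	for attempt, delay := 1, 250*time.Millisecond; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("applied on attempt %d\n", attempt)
			return
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", attempt, err, out)
		log.Printf("apply failed, will retry after %s: %v", delay, lastErr)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff
	}
	log.Fatalf("giving up: %v", lastErr)
}
```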
	W0929 12:06:42.757687  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:45.257903  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:47.043268  871091 node_ready.go:49] node "no-preload-306088" is "Ready"
	I0929 12:06:47.043313  871091 node_ready.go:38] duration metric: took 1.911288329s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:47.043336  871091 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:06:47.043393  871091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:06:47.559973  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.131688912s)
	I0929 12:06:47.560210  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.126485829s)
	I0929 12:06:47.561634  871091 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-306088 addons enable metrics-server
	
	I0929 12:06:47.677198  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.180776144s)
	I0929 12:06:47.677264  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.122845465s)
	I0929 12:06:47.677276  871091 api_server.go:72] duration metric: took 2.730527098s to wait for apiserver process to appear ...
	I0929 12:06:47.677284  871091 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:06:47.677301  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:47.677300  871091 addons.go:479] Verifying addon metrics-server=true in "no-preload-306088"
	I0929 12:06:47.679081  871091 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	W0929 12:06:44.905162  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:47.405106  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:47.680000  871091 addons.go:514] duration metric: took 2.733215653s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:47.681720  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:47.681742  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.178112  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.184346  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:48.184379  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.678093  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.683059  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0929 12:06:48.684122  871091 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:48.684148  871091 api_server.go:131] duration metric: took 1.006856952s to wait for apiserver health ...
	I0929 12:06:48.684159  871091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:48.686922  871091 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:48.686951  871091 system_pods.go:61] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.686958  871091 system_pods.go:61] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.686972  871091 system_pods.go:61] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.686984  871091 system_pods.go:61] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.686999  871091 system_pods.go:61] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:48.687008  871091 system_pods.go:61] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.687018  871091 system_pods.go:61] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.687024  871091 system_pods.go:61] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.687035  871091 system_pods.go:74] duration metric: took 2.869523ms to wait for pod list to return data ...
	I0929 12:06:48.687047  871091 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:48.690705  871091 default_sa.go:45] found service account: "default"
	I0929 12:06:48.690730  871091 default_sa.go:55] duration metric: took 3.675534ms for default service account to be created ...
	I0929 12:06:48.690740  871091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:48.693650  871091 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:48.693684  871091 system_pods.go:89] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.693693  871091 system_pods.go:89] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.693715  871091 system_pods.go:89] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.693725  871091 system_pods.go:89] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.693733  871091 system_pods.go:89] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running
	I0929 12:06:48.693738  871091 system_pods.go:89] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.693743  871091 system_pods.go:89] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.693753  871091 system_pods.go:89] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.693770  871091 system_pods.go:126] duration metric: took 3.022951ms to wait for k8s-apps to be running ...
	I0929 12:06:48.693778  871091 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:48.693838  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:48.706595  871091 system_svc.go:56] duration metric: took 12.805298ms WaitForService to wait for kubelet
	I0929 12:06:48.706622  871091 kubeadm.go:578] duration metric: took 3.759872419s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:48.706643  871091 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:48.709282  871091 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:48.709305  871091 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:48.709317  871091 node_conditions.go:105] duration metric: took 2.669783ms to run NodePressure ...
	I0929 12:06:48.709327  871091 start.go:241] waiting for startup goroutines ...
	I0929 12:06:48.709334  871091 start.go:246] waiting for cluster config update ...
	I0929 12:06:48.709345  871091 start.go:255] writing updated cluster config ...
	I0929 12:06:48.709631  871091 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:48.713435  871091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:48.716857  871091 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:50.722059  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:47.756924  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.757051  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.903749  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:51.904179  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:52.722481  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:55.222976  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:52.257245  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:54.757176  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:56.756246  861376 pod_ready.go:94] pod "coredns-66bc5c9577-zqqdn" is "Ready"
	I0929 12:06:56.756280  861376 pod_ready.go:86] duration metric: took 38.005267391s for pod "coredns-66bc5c9577-zqqdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.761541  861376 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.765343  861376 pod_ready.go:94] pod "etcd-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.765363  861376 pod_ready.go:86] duration metric: took 3.798035ms for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.767218  861376 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.770588  861376 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.770606  861376 pod_ready.go:86] duration metric: took 3.370627ms for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.772342  861376 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.955016  861376 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.955044  861376 pod_ready.go:86] duration metric: took 182.679374ms for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.155127  861376 pod_ready.go:83] waiting for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.555193  861376 pod_ready.go:94] pod "kube-proxy-bspjk" is "Ready"
	I0929 12:06:57.555220  861376 pod_ready.go:86] duration metric: took 400.064967ms for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.755450  861376 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155379  861376 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:58.155405  861376 pod_ready.go:86] duration metric: took 399.927452ms for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155417  861376 pod_ready.go:40] duration metric: took 39.40795228s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:58.201296  861376 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:06:58.203132  861376 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-414542" cluster and "default" namespace by default
	W0929 12:06:53.904220  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:56.404228  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:57.722276  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:00.222038  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:58.904138  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:00.904689  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:03.404607  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:02.722573  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.222722  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.903327  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.903942  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.722224  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.722687  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.904282  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:07:10.403750  866509 pod_ready.go:94] pod "coredns-66bc5c9577-h49hh" is "Ready"
	I0929 12:07:10.403779  866509 pod_ready.go:86] duration metric: took 34.505404913s for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.406142  866509 pod_ready.go:83] waiting for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.409848  866509 pod_ready.go:94] pod "etcd-embed-certs-031687" is "Ready"
	I0929 12:07:10.409884  866509 pod_ready.go:86] duration metric: took 3.705005ms for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.411799  866509 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.415853  866509 pod_ready.go:94] pod "kube-apiserver-embed-certs-031687" is "Ready"
	I0929 12:07:10.415901  866509 pod_ready.go:86] duration metric: took 4.068426ms for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.417734  866509 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.601598  866509 pod_ready.go:94] pod "kube-controller-manager-embed-certs-031687" is "Ready"
	I0929 12:07:10.601629  866509 pod_ready.go:86] duration metric: took 183.870372ms for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.801642  866509 pod_ready.go:83] waiting for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.201791  866509 pod_ready.go:94] pod "kube-proxy-8lx97" is "Ready"
	I0929 12:07:11.201815  866509 pod_ready.go:86] duration metric: took 400.146465ms for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.402190  866509 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802461  866509 pod_ready.go:94] pod "kube-scheduler-embed-certs-031687" is "Ready"
	I0929 12:07:11.802499  866509 pod_ready.go:86] duration metric: took 400.277946ms for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802515  866509 pod_ready.go:40] duration metric: took 35.908487233s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:11.853382  866509 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:11.856798  866509 out.go:179] * Done! kubectl is now configured to use "embed-certs-031687" cluster and "default" namespace by default
	W0929 12:07:12.221602  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:14.221842  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:16.222454  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:18.722820  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:20.725000  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	I0929 12:07:21.222494  871091 pod_ready.go:94] pod "coredns-66bc5c9577-llrxw" is "Ready"
	I0929 12:07:21.222527  871091 pod_ready.go:86] duration metric: took 32.505636564s for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.225025  871091 pod_ready.go:83] waiting for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.228512  871091 pod_ready.go:94] pod "etcd-no-preload-306088" is "Ready"
	I0929 12:07:21.228529  871091 pod_ready.go:86] duration metric: took 3.482765ms for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.230262  871091 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.233598  871091 pod_ready.go:94] pod "kube-apiserver-no-preload-306088" is "Ready"
	I0929 12:07:21.233622  871091 pod_ready.go:86] duration metric: took 3.343035ms for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.235393  871091 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.421017  871091 pod_ready.go:94] pod "kube-controller-manager-no-preload-306088" is "Ready"
	I0929 12:07:21.421047  871091 pod_ready.go:86] duration metric: took 185.636666ms for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.621421  871091 pod_ready.go:83] waiting for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.020579  871091 pod_ready.go:94] pod "kube-proxy-79hf6" is "Ready"
	I0929 12:07:22.020611  871091 pod_ready.go:86] duration metric: took 399.163924ms for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.220586  871091 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620444  871091 pod_ready.go:94] pod "kube-scheduler-no-preload-306088" is "Ready"
	I0929 12:07:22.620469  871091 pod_ready.go:86] duration metric: took 399.857006ms for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620481  871091 pod_ready.go:40] duration metric: took 33.907023232s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:22.667955  871091 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:22.669694  871091 out.go:179] * Done! kubectl is now configured to use "no-preload-306088" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 12:08:10 embed-certs-031687 dockerd[822]: time="2025-09-29T12:08:10.013835433Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:08:10 embed-certs-031687 dockerd[822]: time="2025-09-29T12:08:10.013969830Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:08:10 embed-certs-031687 cri-dockerd[1137]: time="2025-09-29T12:08:10Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:08:10 embed-certs-031687 dockerd[822]: time="2025-09-29T12:08:10.030793633Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:08:10 embed-certs-031687 dockerd[822]: time="2025-09-29T12:08:10.061002002Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:09:39 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:39.084851505Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:09:39 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:39.084912194Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:09:39 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:39.087163713Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:09:39 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:39.087201664Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:09:39 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:39.924765614Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:09:39 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:39.955765596Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:09:43 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:43.972319330Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:09:44 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:44.017809593Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:09:44 embed-certs-031687 dockerd[822]: time="2025-09-29T12:09:44.017925060Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:09:44 embed-certs-031687 cri-dockerd[1137]: time="2025-09-29T12:09:44Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:12:28 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:28.927023445Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:12:28 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:28.961813457Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.030701176Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.030746088Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.032729206Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.032768936Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.091938083Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.142342471Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.142457472Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:12:31 embed-certs-031687 cri-dockerd[1137]: time="2025-09-29T12:12:31Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7cfd570c5c36d       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       2                   4af3b4b1eeadf       storage-provisioner
	1bb6a696f26fe       56cc512116c8f                                                                                         9 minutes ago       Running             busybox                   1                   d03c130d10ba2       busybox
	65f801ea60ac6       52546a367cc9e                                                                                         9 minutes ago       Running             coredns                   1                   4da192a695c28       coredns-66bc5c9577-h49hh
	45741390c4acf       df0860106674d                                                                                         9 minutes ago       Running             kube-proxy                1                   c55e5d2f2ec55       kube-proxy-8lx97
	cd9c371dd7393       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       1                   4af3b4b1eeadf       storage-provisioner
	27f5ea637472f       5f1f5298c888d                                                                                         9 minutes ago       Running             etcd                      1                   a0608b8f66091       etcd-embed-certs-031687
	916456bc8bfb2       a0af72f2ec6d6                                                                                         9 minutes ago       Running             kube-controller-manager   1                   96caa39c99c65       kube-controller-manager-embed-certs-031687
	312c71e7e1091       90550c43ad2bc                                                                                         9 minutes ago       Running             kube-apiserver            1                   658dff92c25e2       kube-apiserver-embed-certs-031687
	468b88a7167c9       46169d968e920                                                                                         9 minutes ago       Running             kube-scheduler            1                   4ac0078e59a95       kube-scheduler-embed-certs-031687
	9d0e4dcfe570e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              busybox                   0                   fd46c3ca34837       busybox
	3a1ef2c419226       52546a367cc9e                                                                                         10 minutes ago      Exited              coredns                   0                   ed4fce6488e6d       coredns-66bc5c9577-h49hh
	b0b17b7d55279       df0860106674d                                                                                         10 minutes ago      Exited              kube-proxy                0                   8f6ae65849b90       kube-proxy-8lx97
	0f7e04b4b32c9       a0af72f2ec6d6                                                                                         10 minutes ago      Exited              kube-controller-manager   0                   d0eee0a7fb6d8       kube-controller-manager-embed-certs-031687
	f99b1cd1736c0       90550c43ad2bc                                                                                         10 minutes ago      Exited              kube-apiserver            0                   90e66d4ed1426       kube-apiserver-embed-certs-031687
	9c9d110cd2307       5f1f5298c888d                                                                                         10 minutes ago      Exited              etcd                      0                   87c3183dd5d82       etcd-embed-certs-031687
	90223f818ad9b       46169d968e920                                                                                         10 minutes ago      Exited              kube-scheduler            0                   59c7ac5354001       kube-scheduler-embed-certs-031687
	
	
	==> coredns [3a1ef2c41922] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46583 - 57672 "HINFO IN 4837871372873753732.8949169030992615212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012772655s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [65f801ea60ac] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50261 - 15654 "HINFO IN 6548381319171350955.8783735724164066773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.415128803s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-031687
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-031687
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=embed-certs-031687
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_05_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:05:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-031687
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:16:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:12:30 +0000   Mon, 29 Sep 2025 12:05:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:12:30 +0000   Mon, 29 Sep 2025 12:05:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:12:30 +0000   Mon, 29 Sep 2025 12:05:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:12:30 +0000   Mon, 29 Sep 2025 12:05:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-031687
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3db81bd2691471cb1038dba05261875
	  System UUID:                bc2311f8-9925-42fe-a1ac-db9ee40b62fe
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-h49hh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-embed-certs-031687                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-embed-certs-031687             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-embed-certs-031687    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-8lx97                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-embed-certs-031687             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-w5slh               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-77hqb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l9zp7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m37s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node embed-certs-031687 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node embed-certs-031687 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node embed-certs-031687 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node embed-certs-031687 event: Registered Node embed-certs-031687 in Controller
	  Normal  Starting                 9m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m41s (x8 over 9m42s)  kubelet          Node embed-certs-031687 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m41s (x8 over 9m42s)  kubelet          Node embed-certs-031687 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m41s (x7 over 9m42s)  kubelet          Node embed-certs-031687 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m36s                  node-controller  Node embed-certs-031687 event: Registered Node embed-certs-031687 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e ea 9d d2 75 10 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000032] ll header: 00000000: ff ff ff ff ff ff 02 ed 9c 9f 01 b3 08 06
	[  +7.676274] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 8f 99 59 79 53 08 06
	[  +0.010443] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 ef 7b 7a 25 80 08 06
	[Sep29 12:05] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 2f 1f 69 18 cd 08 06
	[  +1.465609] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e fa a1 d1 16 fd 08 06
	[  +0.010904] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 28 d0 79 65 86 08 06
	[ +11.321410] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 4d be 93 b2 64 08 06
	[  +0.030376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d1 94 90 6f a6 08 06
	[  +0.372330] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ae 62 92 9c b4 08 06
	[Sep29 12:06] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be c7 f6 43 2b 7f 08 06
	[ +17.127071] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a de e7 85 72 24 08 06
	[ +12.501214] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 4d 9c c6 34 d5 08 06
	
	
	==> etcd [27f5ea637472] <==
	{"level":"warn","ts":"2025-09-29T12:06:33.655268Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.661517Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:35982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.669725Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36016","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.676030Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.682405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.688697Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36028","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.694808Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36048","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.703824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.710419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.716966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.725020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.731313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.737427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.743333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.749548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.761756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.763617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.769866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.775868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.782263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.788312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.794109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.806295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.812945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.818960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	
	
	==> etcd [9c9d110cd230] <==
	{"level":"info","ts":"2025-09-29T12:05:20.253137Z","caller":"traceutil/trace.go:172","msg":"trace[438781435] transaction","detail":"{read_only:false; response_revision:16; number_of_response:1; }","duration":"216.614403ms","start":"2025-09-29T12:05:20.036516Z","end":"2025-09-29T12:05:20.253131Z","steps":["trace[438781435] 'process raft request'  (duration: 216.108114ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:05:20.253177Z","caller":"traceutil/trace.go:172","msg":"trace[178786943] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"216.076305ms","start":"2025-09-29T12:05:20.037091Z","end":"2025-09-29T12:05:20.253167Z","steps":["trace[178786943] 'process raft request'  (duration: 215.721542ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:05:20.253177Z","caller":"traceutil/trace.go:172","msg":"trace[2052007329] transaction","detail":"{read_only:false; response_revision:18; number_of_response:1; }","duration":"216.315627ms","start":"2025-09-29T12:05:20.036851Z","end":"2025-09-29T12:05:20.253166Z","steps":["trace[2052007329] 'process raft request'  (duration: 215.856225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T12:05:20.315571Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.503296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-09-29T12:05:20.315644Z","caller":"traceutil/trace.go:172","msg":"trace[1802507612] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:20; }","duration":"101.589942ms","start":"2025-09-29T12:05:20.214042Z","end":"2025-09-29T12:05:20.315632Z","steps":["trace[1802507612] 'agreement among raft nodes before linearized reading'  (duration: 99.529105ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:05:20.316203Z","caller":"traceutil/trace.go:172","msg":"trace[590188282] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"144.624881ms","start":"2025-09-29T12:05:20.171567Z","end":"2025-09-29T12:05:20.316192Z","steps":["trace[590188282] 'process raft request'  (duration: 142.020353ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:05:20.316481Z","caller":"traceutil/trace.go:172","msg":"trace[2129118222] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"140.057316ms","start":"2025-09-29T12:05:20.176413Z","end":"2025-09-29T12:05:20.316470Z","steps":["trace[2129118222] 'process raft request'  (duration: 139.345283ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:06:12.985194Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:06:12.985271Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"embed-certs-031687","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:06:12.985362Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:19.987502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:19.988725Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:19.988786Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-09-29T12:06:19.988846Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:06:19.988841Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:19.988905Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:06:19.988928Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:19.988887Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:06:19.988834Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:19.988967Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:06:19.988984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:19.990811Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-09-29T12:06:19.990897Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:19.990936Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-29T12:06:19.990951Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"embed-certs-031687","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 12:16:13 up  1:58,  0 users,  load average: 1.28, 1.40, 2.25
	Linux embed-certs-031687 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [312c71e7e109] <==
	W0929 12:12:35.308597       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:12:35.308646       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:12:35.308661       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:12:35.309679       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:12:35.309762       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:12:35.309776       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:12:39.580090       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:12:44.826014       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:13:48.646723       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:13:58.539243       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:14:35.309752       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:14:35.309819       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:14:35.309834       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:14:35.309962       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:14:35.310058       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:14:35.310858       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:14:55.253652       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:15:20.965390       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:15:56.805948       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [f99b1cd1736c] <==
	W0929 12:06:22.173248       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.225623       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.250487       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.260995       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.321634       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.322890       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.326206       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.326466       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.442261       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.483047       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.522776       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.563322       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.625065       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.628446       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.641087       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.695925       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.701399       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.713117       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.720508       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.723869       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.744293       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.778299       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.872752       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.907850       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.929310       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0f7e04b4b32c] <==
	I0929 12:05:26.847273       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 12:05:26.847121       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:05:26.847135       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 12:05:26.847105       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:05:26.847793       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 12:05:26.848253       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:05:26.848648       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:05:26.848934       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 12:05:26.849937       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:05:26.850787       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-031687"
	I0929 12:05:26.850836       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:05:26.850004       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 12:05:26.853527       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 12:05:26.853608       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:05:26.857657       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:05:26.858247       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:26.861353       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:05:26.864042       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:26.868711       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:05:26.876993       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:05:26.885414       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:05:26.885549       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:05:26.896330       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:05:26.896353       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:05:26.896362       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [916456bc8bfb] <==
	I0929 12:10:07.740291       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:10:37.708691       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:10:37.747203       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:11:07.713517       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:11:07.754118       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:11:37.718262       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:11:37.761710       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:12:07.722052       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:12:07.769041       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:12:37.727343       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:12:37.777083       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:13:07.731964       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:13:07.784666       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:13:37.736686       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:13:37.791521       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:14:07.741111       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:14:07.798504       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:14:37.746708       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:14:37.806512       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:15:07.751925       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:15:07.814323       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:15:37.757009       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:15:37.821710       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:16:07.761513       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:16:07.828834       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [45741390c4ac] <==
	I0929 12:06:35.626310       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:06:35.678839       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:06:35.778998       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:06:35.779050       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 12:06:35.779224       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:06:35.809789       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:06:35.809858       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:06:35.815781       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:06:35.816158       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:06:35.816189       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:35.817699       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:06:35.817726       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:06:35.817754       1 config.go:200] "Starting service config controller"
	I0929 12:06:35.817758       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:06:35.817776       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:06:35.817781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:06:35.818059       1 config.go:309] "Starting node config controller"
	I0929 12:06:35.818074       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:06:35.818080       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:06:35.917829       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:06:35.917843       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:06:35.917890       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b0b17b7d5527] <==
	I0929 12:05:28.818009       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:05:28.893572       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:05:28.993751       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:05:28.993806       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 12:05:28.994987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:05:29.041005       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:05:29.041343       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:05:29.050581       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:05:29.050932       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:05:29.050972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:29.053129       1 config.go:200] "Starting service config controller"
	I0929 12:05:29.053596       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:05:29.053638       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:05:29.053681       1 config.go:309] "Starting node config controller"
	I0929 12:05:29.053697       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:05:29.053704       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:05:29.053601       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:05:29.053551       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:05:29.054395       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:05:29.154177       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:05:29.155321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:05:29.155345       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [468b88a7167c] <==
	I0929 12:06:33.005670       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:06:34.259096       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:06:34.259130       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:06:34.259142       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:06:34.259151       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:06:34.291389       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:06:34.291422       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:34.302889       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:34.302927       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:34.303634       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:06:34.304134       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 12:06:34.306186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:06:34.306321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:06:34.306427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:06:34.306595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:06:34.306663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:06:34.308951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:06:34.308955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:06:34.309120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I0929 12:06:34.403997       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [90223f818ad9] <==
	E0929 12:05:19.878776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:19.879262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:05:19.879317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:05:19.879326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:05:19.879408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:05:19.879413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:05:19.879442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:05:19.879997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:05:19.880055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:05:20.787690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:20.796703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:05:20.797675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:05:20.828729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:05:20.938370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:05:21.036990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:21.144021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:05:21.199845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:05:21.216120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:05:23.571597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:12.996246       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:12.996364       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:06:12.997894       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:06:12.998708       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:06:12.998719       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:06:12.998740       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 12:14:23 embed-certs-031687 kubelet[1366]: E0929 12:14:23.907118    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:14:27 embed-certs-031687 kubelet[1366]: E0929 12:14:27.906306    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:14:33 embed-certs-031687 kubelet[1366]: E0929 12:14:33.906173    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:14:35 embed-certs-031687 kubelet[1366]: E0929 12:14:35.906809    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:14:41 embed-certs-031687 kubelet[1366]: E0929 12:14:41.906568    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:14:47 embed-certs-031687 kubelet[1366]: E0929 12:14:47.908238    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:14:49 embed-certs-031687 kubelet[1366]: E0929 12:14:49.907138    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:14:53 embed-certs-031687 kubelet[1366]: E0929 12:14:53.906365    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:14:59 embed-certs-031687 kubelet[1366]: E0929 12:14:59.911984    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:15:00 embed-certs-031687 kubelet[1366]: E0929 12:15:00.906868    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:15:05 embed-certs-031687 kubelet[1366]: E0929 12:15:05.906514    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:15:11 embed-certs-031687 kubelet[1366]: E0929 12:15:11.907172    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:15:14 embed-certs-031687 kubelet[1366]: E0929 12:15:14.906337    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:15:19 embed-certs-031687 kubelet[1366]: E0929 12:15:19.907046    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:15:24 embed-certs-031687 kubelet[1366]: E0929 12:15:24.906289    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:15:25 embed-certs-031687 kubelet[1366]: E0929 12:15:25.907151    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:15:34 embed-certs-031687 kubelet[1366]: E0929 12:15:34.906967    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:15:35 embed-certs-031687 kubelet[1366]: E0929 12:15:35.907156    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:15:40 embed-certs-031687 kubelet[1366]: E0929 12:15:40.906082    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:15:48 embed-certs-031687 kubelet[1366]: E0929 12:15:48.906908    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:15:49 embed-certs-031687 kubelet[1366]: E0929 12:15:49.906257    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:15:55 embed-certs-031687 kubelet[1366]: E0929 12:15:55.907121    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:16:01 embed-certs-031687 kubelet[1366]: E0929 12:16:01.906984    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:16:03 embed-certs-031687 kubelet[1366]: E0929 12:16:03.906004    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:16:09 embed-certs-031687 kubelet[1366]: E0929 12:16:09.907249    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	
	
	==> storage-provisioner [7cfd570c5c36] <==
	W0929 12:15:48.380158       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:50.384065       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:50.388172       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:52.391050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:52.396085       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:54.399424       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:54.403794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:56.407221       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:56.411282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:58.414473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:15:58.418584       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:00.422376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:00.430741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:02.433624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:02.437735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:04.440586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:04.444737       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:06.447450       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:06.451187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:08.454273       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:08.458203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:10.462081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:10.467794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:12.471131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:12.476113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cd9c371dd739] <==
	I0929 12:06:35.573515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:07:05.575382       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031687 -n embed-certs-031687
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-031687 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-w5slh dashboard-metrics-scraper-6ffb444bf9-77hqb kubernetes-dashboard-855c9754f9-l9zp7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-031687 describe pod metrics-server-746fcd58dc-w5slh dashboard-metrics-scraper-6ffb444bf9-77hqb kubernetes-dashboard-855c9754f9-l9zp7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-031687 describe pod metrics-server-746fcd58dc-w5slh dashboard-metrics-scraper-6ffb444bf9-77hqb kubernetes-dashboard-855c9754f9-l9zp7: exit status 1 (62.161359ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-w5slh" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-77hqb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-l9zp7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-031687 describe pod metrics-server-746fcd58dc-w5slh dashboard-metrics-scraper-6ffb444bf9-77hqb kubernetes-dashboard-855c9754f9-l9zp7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.38s)
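The kubelet log above shows three independent pull failures behind this timeout: docker.io/kubernetesui/dashboard:v2.7.0 is rejected by Docker Hub's unauthenticated pull rate limit (toomanyrequests), fake.domain/registry.k8s.io/echoserver:1.4 points at a placeholder registry host that does not resolve, and registry.k8s.io/echoserver:1.4 uses the retired Docker image manifest v2 schema 1 format. With those pods stuck in ImagePullBackOff, the dashboard pod the test waits on never becomes Ready. Below is a minimal client-go sketch of how such Waiting reasons could be enumerated programmatically; it is illustrative only and not part of the minikube test suite, and the default kubeconfig location and the two namespaces are assumptions.

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a reachable cluster via the default kubeconfig location.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Namespaces chosen to match the non-running pods listed above.
	for _, ns := range []string{"kube-system", "kubernetes-dashboard"} {
		pods, err := cs.CoreV1().Pods(ns).List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			for _, st := range p.Status.ContainerStatuses {
				if w := st.State.Waiting; w != nil {
					// Prints e.g. "ImagePullBackOff: Back-off pulling image ..."
					fmt.Printf("%s/%s [%s] %s: %s\n", ns, p.Name, st.Name, w.Reason, w.Message)
				}
			}
		}
	}
}

Against the cluster in this run, such a check would surface the same Back-off messages that repeat in the kubelet log above.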

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.4s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5bdqx" [d037c2d3-033d-420d-b665-eef2dd2e36bd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 12:07:23.535135  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:30.276394  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:39.268027  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:44.016696  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:51.164798  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:51.171145  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:51.182508  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:51.204527  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:51.245903  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:51.327379  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:51.488930  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:51.810928  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:52.453105  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:53.734454  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:56.296554  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:56.889841  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:07:59.630213  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:01.418695  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:11.660447  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:24.978580  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:32.141845  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:38.763344  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:38.769709  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:38.781074  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:38.802466  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:38.843917  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:38.925382  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:39.087341  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:39.409007  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:40.050401  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:41.332133  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:43.894328  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:49.016173  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:08:59.257979  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:01.189966  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:03.667657  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:13.103863  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:19.739692  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.245819  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.252143  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.263567  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.285008  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.326448  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.407863  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.569583  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.588091  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:20.891748  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:21.534089  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:22.815366  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.295051  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.301531  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.312964  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.334457  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.375921  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.377086  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.457563  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.619599  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:25.941421  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:26.582889  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:27.865105  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:30.426505  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:30.499024  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:35.548808  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:40.741353  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:45.791013  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:46.414894  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:46.899998  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:48.288920  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:09:53.819040  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:00.701909  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:01.223214  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:06.272868  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:14.118671  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:15.768800  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:35.025332  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:42.185124  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:43.471859  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:10:47.234571  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:11:17.325856  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:11:22.624085  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:11:45.031854  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:12:03.042609  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:12:04.107246  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:12:09.156141  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:12:30.742190  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:12:51.164087  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:13:18.867021  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:13:38.762491  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:06.466261  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:20.246302  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:20.587564  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:25.295100  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:46.414963  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:47.949178  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:48.288838  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:52.998098  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:14:53.819722  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-306088 -n no-preload-306088
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 12:16:23.315428995 +0000 UTC m=+3865.359241093
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-306088 describe po kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-306088 describe po kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-5bdqx
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-306088/192.168.94.2
Start Time:       Mon, 29 Sep 2025 12:06:50 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ch8sn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-ch8sn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m33s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx to no-preload-306088
Normal   Pulling    6m33s (x5 over 9m33s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m33s (x5 over 9m33s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m33s (x5 over 9m33s)   kubelet            Error: ErrImagePull
Normal   BackOff    4m29s (x21 over 9m32s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     4m29s (x21 over 9m32s)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-306088 logs kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-306088 logs kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard: exit status 1 (74.347705ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-5bdqx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-306088 logs kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
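The 9m0s wait logged above polls for a pod labeled k8s-app=kubernetes-dashboard to report Ready before the context deadline expires; when the image pull is the bottleneck, as in the Events above, the poll simply runs out its deadline, which is what "context deadline exceeded" reports. Below is a minimal sketch of that kind of wait loop with client-go; it is illustrative only, and waitForDashboard, the 5-second poll interval, and the kubeconfig handling are assumptions rather than the start_stop_delete_test.go implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForDashboard blocks until a kubernetes-dashboard pod reports Ready,
// or until the timeout (here, the same 9 minutes the test allows) expires.
func waitForDashboard(ctx context.Context, cs kubernetes.Interface, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 5*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // tolerate transient API errors and keep polling
			}
			for _, p := range pods.Items {
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return true, nil
					}
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForDashboard(context.Background(), cs, 9*time.Minute); err != nil {
		fmt.Println("dashboard pod never became Ready:", err)
	}
}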
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-306088
helpers_test.go:243: (dbg) docker inspect no-preload-306088:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831",
	        "Created": "2025-09-29T12:05:02.667478034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 871291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:06:36.757597432Z",
	            "FinishedAt": "2025-09-29T12:06:35.903235818Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831/hosts",
	        "LogPath": "/var/lib/docker/containers/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831-json.log",
	        "Name": "/no-preload-306088",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-306088:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-306088",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831",
	                "LowerDir": "/var/lib/docker/overlay2/da25e1a08de11f6554acb2af0426af72b3ab8cb476b88a9f86451aa041390443-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da25e1a08de11f6554acb2af0426af72b3ab8cb476b88a9f86451aa041390443/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da25e1a08de11f6554acb2af0426af72b3ab8cb476b88a9f86451aa041390443/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da25e1a08de11f6554acb2af0426af72b3ab8cb476b88a9f86451aa041390443/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-306088",
	                "Source": "/var/lib/docker/volumes/no-preload-306088/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-306088",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-306088",
	                "name.minikube.sigs.k8s.io": "no-preload-306088",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "822fcef146208b224a6c528e2a9c025368dead1e675b20806d784d7d4441cf14",
	            "SandboxKey": "/var/run/docker/netns/822fcef14620",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-306088": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:52:f0:1a:bb:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d4ca4f1377a2f0c0999137059d5401179046ae6f170d7c85e62172b83a4ca5f9",
	                    "EndpointID": "44ede35c9e6699d3227d04b88944365e41464e7493fbda822be9e4cfdf17738f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-306088",
	                        "0f0cd5d8dce4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
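
For anyone reproducing this post-mortem locally, here is a minimal sketch (assuming the docker CLI is on PATH and a container named no-preload-306088 from the inspect output above still exists) of reading the forwarded SSH port out of the same NetworkSettings.Ports block, using the identical Go template the harness runs later in this log:

	// portlookup.go -- illustrative sketch only, not part of the test suite.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"strings"
	)

	func main() {
		// Container name taken from the docker inspect output above; adjust for other profiles.
		name := "no-preload-306088"
		// Same Go template the harness uses to resolve the 22/tcp host port (33523 in the mapping above).
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, name).Output()
		if err != nil {
			log.Fatalf("docker inspect failed: %v", err)
		}
		fmt.Println(strings.TrimSpace(string(out)))
	}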
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-306088 -n no-preload-306088
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-306088 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-306088 logs -n 25: (1.097080247s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p calico-934155 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo cat /etc/containerd/config.toml                                                                                                                                                                                           │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo containerd config dump                                                                                                                                                                                                    │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p disable-driver-mounts-929504                                                                                                                                                                                                                 │ disable-driver-mounts-929504 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │                     │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl cat crio --no-pager                                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                   │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo crio config                                                                                                                                                                                                               │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p calico-934155                                                                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-858855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ stop    │ -p default-k8s-diff-port-414542 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-031687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p embed-certs-031687 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-031687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-306088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p no-preload-306088 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p no-preload-306088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:06:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:06:36.516482  871091 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:06:36.516771  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.516782  871091 out.go:374] Setting ErrFile to fd 2...
	I0929 12:06:36.516786  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.517034  871091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:06:36.517566  871091 out.go:368] Setting JSON to false
	I0929 12:06:36.519099  871091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6540,"bootTime":1759141056,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:06:36.519186  871091 start.go:140] virtualization: kvm guest
	I0929 12:06:36.521306  871091 out.go:179] * [no-preload-306088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:06:36.522994  871091 notify.go:220] Checking for updates...
	I0929 12:06:36.523025  871091 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:06:36.524361  871091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:06:36.526212  871091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:36.527856  871091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:06:36.529330  871091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:06:36.530640  871091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:06:36.532489  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:36.532971  871091 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:06:36.557847  871091 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:06:36.557955  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.619389  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.606711858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.619500  871091 docker.go:318] overlay module found
	I0929 12:06:36.621623  871091 out.go:179] * Using the docker driver based on existing profile
	I0929 12:06:36.622958  871091 start.go:304] selected driver: docker
	I0929 12:06:36.622977  871091 start.go:924] validating driver "docker" against &{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.623069  871091 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:06:36.623939  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.681042  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.670856635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.681348  871091 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:36.681383  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:36.681440  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:36.681496  871091 start.go:348] cluster config:
	{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.683409  871091 out.go:179] * Starting "no-preload-306088" primary control-plane node in "no-preload-306088" cluster
	I0929 12:06:36.684655  871091 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:06:36.685791  871091 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:06:36.686923  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:36.687033  871091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:06:36.687071  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:36.687230  871091 cache.go:107] acquiring lock: {Name:mk458b8403b4159d98f7ca606060a1e77262160a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687232  871091 cache.go:107] acquiring lock: {Name:mkf63d99dbdfbf068ef033ecf191a655730e20a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687337  871091 cache.go:107] acquiring lock: {Name:mkd9e4857d62d04bc7d49138f7e4fb0f42e97bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687338  871091 cache.go:107] acquiring lock: {Name:mk4450faafd650ccd11a718cb9b7190d17ab5337 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687401  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 12:06:36.687412  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 12:06:36.687392  871091 cache.go:107] acquiring lock: {Name:mkbcd57035e12e42444c6b36c8f1b923cbfef46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687414  871091 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 202.746µs
	I0929 12:06:36.687421  871091 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 90.507µs
	I0929 12:06:36.687399  871091 cache.go:107] acquiring lock: {Name:mkde0ed0d421c77cb34c222a8ab10a2c13e3e1ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687387  871091 cache.go:107] acquiring lock: {Name:mk11769872d039acf11fe2041fd2e18abd2ae3a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687446  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0929 12:06:36.687455  871091 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 64.616µs
	I0929 12:06:36.687464  871091 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 12:06:36.687467  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 12:06:36.687476  871091 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 144.146µs
	I0929 12:06:36.687484  871091 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 12:06:36.687374  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 12:06:36.687507  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 12:06:36.687466  871091 cache.go:107] acquiring lock: {Name:mk481f9282d27c94586ac987d8a6cd5ea0f1d68c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687587  871091 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.629µs
	I0929 12:06:36.687586  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 12:06:36.687603  871091 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 12:06:36.687581  871091 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 346.559µs
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 12:06:36.687607  871091 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 276.399µs
	I0929 12:06:36.687618  871091 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 12:06:36.687620  871091 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 12:06:36.687628  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 12:06:36.687644  871091 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 230.083µs
	I0929 12:06:36.687655  871091 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 12:06:36.687663  871091 cache.go:87] Successfully saved all images to host disk.
	I0929 12:06:36.709009  871091 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:06:36.709031  871091 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:06:36.709049  871091 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:06:36.709083  871091 start.go:360] acquireMachinesLock for no-preload-306088: {Name:mk0ed8d49a268e0ff510517b50934257047b58c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.709145  871091 start.go:364] duration metric: took 44.22µs to acquireMachinesLock for "no-preload-306088"
	I0929 12:06:36.709171  871091 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:06:36.709180  871091 fix.go:54] fixHost starting: 
	I0929 12:06:36.709410  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:36.728528  871091 fix.go:112] recreateIfNeeded on no-preload-306088: state=Stopped err=<nil>
	W0929 12:06:36.728557  871091 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 12:06:33.757650  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:35.757705  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:34.860020  866509 addons.go:514] duration metric: took 2.511095137s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:34.860298  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:34.860316  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.355994  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.362405  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:35.362444  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.855983  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.860174  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 12:06:35.861328  866509 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:35.861365  866509 api_server.go:131] duration metric: took 1.00564321s to wait for apiserver health ...
	I0929 12:06:35.861375  866509 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:35.865988  866509 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:35.866018  866509 system_pods.go:61] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.866030  866509 system_pods.go:61] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.866041  866509 system_pods.go:61] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.866050  866509 system_pods.go:61] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.866055  866509 system_pods.go:61] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.866062  866509 system_pods.go:61] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.866068  866509 system_pods.go:61] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.866076  866509 system_pods.go:61] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.866083  866509 system_pods.go:74] duration metric: took 4.69699ms to wait for pod list to return data ...
	I0929 12:06:35.866093  866509 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:35.868695  866509 default_sa.go:45] found service account: "default"
	I0929 12:06:35.868715  866509 default_sa.go:55] duration metric: took 2.61564ms for default service account to be created ...
	I0929 12:06:35.868726  866509 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:35.872060  866509 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:35.872097  866509 system_pods.go:89] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.872135  866509 system_pods.go:89] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.872153  866509 system_pods.go:89] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.872164  866509 system_pods.go:89] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.872173  866509 system_pods.go:89] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.872187  866509 system_pods.go:89] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.872200  866509 system_pods.go:89] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.872215  866509 system_pods.go:89] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.872229  866509 system_pods.go:126] duration metric: took 3.496882ms to wait for k8s-apps to be running ...
	I0929 12:06:35.872241  866509 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:35.872298  866509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:35.886596  866509 system_svc.go:56] duration metric: took 14.342667ms WaitForService to wait for kubelet
	I0929 12:06:35.886631  866509 kubeadm.go:578] duration metric: took 3.537789699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:35.886658  866509 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:35.889756  866509 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:35.889792  866509 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:35.889815  866509 node_conditions.go:105] duration metric: took 3.143621ms to run NodePressure ...
	I0929 12:06:35.889827  866509 start.go:241] waiting for startup goroutines ...
	I0929 12:06:35.889846  866509 start.go:246] waiting for cluster config update ...
	I0929 12:06:35.889860  866509 start.go:255] writing updated cluster config ...
	I0929 12:06:35.890142  866509 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:35.893992  866509 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:35.898350  866509 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:37.904542  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:36.730585  871091 out.go:252] * Restarting existing docker container for "no-preload-306088" ...
	I0929 12:06:36.730671  871091 cli_runner.go:164] Run: docker start no-preload-306088
	I0929 12:06:36.986434  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:37.007128  871091 kic.go:430] container "no-preload-306088" state is running.
	I0929 12:06:37.007513  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:37.028527  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:37.028818  871091 machine.go:93] provisionDockerMachine start ...
	I0929 12:06:37.028949  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:37.047803  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:37.048197  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:37.048230  871091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:06:37.048917  871091 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35296->127.0.0.1:33523: read: connection reset by peer
	I0929 12:06:40.187221  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.187251  871091 ubuntu.go:182] provisioning hostname "no-preload-306088"
	I0929 12:06:40.187303  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.206043  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.206254  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.206273  871091 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-306088 && echo "no-preload-306088" | sudo tee /etc/hostname
	I0929 12:06:40.358816  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.358923  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.377596  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.377950  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.377981  871091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-306088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-306088/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-306088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:06:40.514897  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:06:40.514933  871091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:06:40.514962  871091 ubuntu.go:190] setting up certificates
	I0929 12:06:40.514972  871091 provision.go:84] configureAuth start
	I0929 12:06:40.515033  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:40.534028  871091 provision.go:143] copyHostCerts
	I0929 12:06:40.534112  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:06:40.534132  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:06:40.534221  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:06:40.534378  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:06:40.534391  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:06:40.534433  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:06:40.534548  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:06:40.534559  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:06:40.534599  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:06:40.534700  871091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.no-preload-306088 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-306088]
	I0929 12:06:40.796042  871091 provision.go:177] copyRemoteCerts
	I0929 12:06:40.796100  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:06:40.796141  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.814638  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:40.913779  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:06:40.940147  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:06:40.966181  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:06:40.992149  871091 provision.go:87] duration metric: took 477.163201ms to configureAuth
	I0929 12:06:40.992177  871091 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:06:40.992354  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:40.992402  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.010729  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.011015  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.011031  871091 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:06:41.149250  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:06:41.149283  871091 ubuntu.go:71] root file system type: overlay
	I0929 12:06:41.149434  871091 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:06:41.149508  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.169382  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.169625  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.169731  871091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:06:41.327834  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:06:41.327968  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.349146  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.349454  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.349487  871091 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:06:41.500464  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:06:41.500497  871091 machine.go:96] duration metric: took 4.471659866s to provisionDockerMachine
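With the regenerated unit installed and dockerd restarted listening on tcp://0.0.0.0:2376 with --tlsverify, a client holding the matching certificates could talk to that endpoint directly; a hedged example (the address and certificate paths are assumptions for illustration):

    # illustrative client-side check of the TLS-protected docker endpoint
    docker --tlsverify \
      --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
      -H tcp://192.168.94.2:2376 version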
	I0929 12:06:41.500512  871091 start.go:293] postStartSetup for "no-preload-306088" (driver="docker")
	I0929 12:06:41.500527  871091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:06:41.500590  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:06:41.500647  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	W0929 12:06:38.257066  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.257540  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.404187  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:42.404863  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:41.520904  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.620006  871091 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:06:41.623863  871091 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:06:41.623914  871091 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:06:41.623925  871091 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:06:41.623935  871091 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:06:41.623959  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:06:41.624015  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:06:41.624111  871091 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:06:41.624227  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:06:41.634489  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:41.661187  871091 start.go:296] duration metric: took 160.643724ms for postStartSetup
	I0929 12:06:41.661275  871091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:06:41.661317  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.679286  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.773350  871091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:06:41.778053  871091 fix.go:56] duration metric: took 5.068864392s for fixHost
	I0929 12:06:41.778084  871091 start.go:83] releasing machines lock for "no-preload-306088", held for 5.068924928s
	I0929 12:06:41.778174  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:41.796247  871091 ssh_runner.go:195] Run: cat /version.json
	I0929 12:06:41.796329  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.796378  871091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:06:41.796452  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.815939  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.816193  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.990299  871091 ssh_runner.go:195] Run: systemctl --version
	I0929 12:06:41.995288  871091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:06:42.000081  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:06:42.020438  871091 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:06:42.020518  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:06:42.029627  871091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:06:42.029658  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.029697  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.029845  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.046748  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:06:42.057142  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:06:42.067569  871091 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:06:42.067621  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:06:42.078146  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.089207  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:06:42.099515  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.109953  871091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:06:42.119715  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:06:42.130148  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:06:42.140184  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
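The sed chain above rewrites /etc/containerd/config.toml in place (systemd cgroup driver, pause image, runc v2 shim, unprivileged ports); a simple spot-check of the result would be:

    # illustrative verification that the in-place edits landed
    grep -E 'SystemdCgroup|sandbox_image|enable_unprivileged_ports' /etc/containerd/config.toml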
	I0929 12:06:42.151082  871091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:06:42.161435  871091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:06:42.171100  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.243863  871091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:06:42.322789  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.322843  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.322910  871091 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:06:42.336670  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.348890  871091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:06:42.364257  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.376038  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:06:42.387832  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.405901  871091 ssh_runner.go:195] Run: which cri-dockerd
	I0929 12:06:42.409515  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:06:42.419370  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:06:42.438082  871091 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:06:42.511679  871091 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:06:42.584368  871091 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:06:42.584521  871091 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
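The 129-byte /etc/docker/daemon.json pushed above is generated in memory and its exact contents are not shown in the log; a daemon.json that selects the systemd cgroup driver, which is what this step is configuring, typically looks like the following (illustrative only, not the verbatim file):

    # illustrative only -- the real file is generated by minikube
    sudo tee /etc/docker/daemon.json >/dev/null <<'EOF'
    {
      "exec-opts": ["native.cgroupdriver=systemd"]
    }
    EOF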
	I0929 12:06:42.604074  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:06:42.615691  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.684549  871091 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 12:06:43.531184  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:06:43.543167  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:06:43.555540  871091 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 12:06:43.568219  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.580095  871091 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:06:43.648390  871091 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:06:43.718653  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.787645  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:06:43.810310  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:06:43.822583  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.892062  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:06:43.972699  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.985893  871091 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:06:43.985990  871091 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:06:43.990107  871091 start.go:563] Will wait 60s for crictl version
	I0929 12:06:43.990186  871091 ssh_runner.go:195] Run: which crictl
	I0929 12:06:43.993712  871091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:06:44.032208  871091 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:06:44.032285  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.059274  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.086497  871091 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:06:44.086597  871091 cli_runner.go:164] Run: docker network inspect no-preload-306088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:06:44.103997  871091 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 12:06:44.108202  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.121433  871091 kubeadm.go:875] updating cluster {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:06:44.121548  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:44.121582  871091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:06:44.142018  871091 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 12:06:44.142049  871091 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:06:44.142057  871091 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0929 12:06:44.142162  871091 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-306088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:06:44.142214  871091 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:06:44.196459  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:44.196503  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:44.196520  871091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:06:44.196548  871091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-306088 NodeName:no-preload-306088 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:06:44.196683  871091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-306088"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:06:44.196744  871091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:06:44.206772  871091 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:06:44.206838  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:06:44.216022  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0929 12:06:44.234761  871091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:06:44.253842  871091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
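With the generated config staged at /var/tmp/minikube/kubeadm.yaml.new, a fresh control plane would normally consume it via kubeadm directly; shown here only as an illustration, since this particular run detects existing configuration below and restarts the cluster instead:

    # illustrative only -- this run restarts an existing control plane rather than running init
    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=all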
	I0929 12:06:44.274561  871091 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:06:44.278469  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.290734  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.362332  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:44.386713  871091 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088 for IP: 192.168.94.2
	I0929 12:06:44.386744  871091 certs.go:194] generating shared ca certs ...
	I0929 12:06:44.386768  871091 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.386954  871091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:06:44.387011  871091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:06:44.387021  871091 certs.go:256] generating profile certs ...
	I0929 12:06:44.387100  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/client.key
	I0929 12:06:44.387155  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key.eb5a4896
	I0929 12:06:44.387190  871091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key
	I0929 12:06:44.387288  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:06:44.387320  871091 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:06:44.387329  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:06:44.387351  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:06:44.387373  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:06:44.387393  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:06:44.387440  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:44.388149  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:06:44.419158  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:06:44.448205  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:06:44.482979  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:06:44.517557  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:06:44.549867  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 12:06:44.576134  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:06:44.604658  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 12:06:44.631756  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:06:44.658081  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:06:44.684187  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:06:44.710650  871091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:06:44.729717  871091 ssh_runner.go:195] Run: openssl version
	I0929 12:06:44.735824  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:06:44.745812  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749234  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749293  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.756789  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:06:44.767948  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:06:44.778834  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782611  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782681  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.790603  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:06:44.800010  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:06:44.810306  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814380  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814509  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.822959  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
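The hash-named symlinks created above (3ec20f2e.0, b5213941.0, 51391683.0) follow OpenSSL's subject-hash convention for trust directories; the pattern, sketched for one of the certificates from this log:

    # illustrative -- derive the subject hash and create the matching trust-store symlink
    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"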
	I0929 12:06:44.834110  871091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:06:44.837912  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:06:44.844692  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:06:44.851275  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:06:44.858576  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:06:44.866396  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:06:44.875491  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
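Each of the -checkend 86400 probes above succeeds only if the certificate remains valid for at least another 24 hours, which is roughly how this restart path decides whether regeneration is needed; a standalone example of the same check:

    # illustrative freshness check on one of the certificates named in the log
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "still valid for at least 24h"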
	I0929 12:06:44.883074  871091 kubeadm.go:392] StartCluster: {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:44.883211  871091 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:06:44.904790  871091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:06:44.917300  871091 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:06:44.917322  871091 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:06:44.917374  871091 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:06:44.927571  871091 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:06:44.928675  871091 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-306088" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.929373  871091 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-357219/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-306088" cluster setting kubeconfig missing "no-preload-306088" context setting]
	I0929 12:06:44.930612  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.932840  871091 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:06:44.943928  871091 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0929 12:06:44.943969  871091 kubeadm.go:593] duration metric: took 26.639509ms to restartPrimaryControlPlane
	I0929 12:06:44.943982  871091 kubeadm.go:394] duration metric: took 60.918658ms to StartCluster
	I0929 12:06:44.944003  871091 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.944082  871091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.946478  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.946713  871091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:06:44.946792  871091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:06:44.946942  871091 addons.go:69] Setting storage-provisioner=true in profile "no-preload-306088"
	I0929 12:06:44.946950  871091 addons.go:69] Setting default-storageclass=true in profile "no-preload-306088"
	I0929 12:06:44.946967  871091 addons.go:238] Setting addon storage-provisioner=true in "no-preload-306088"
	I0929 12:06:44.946975  871091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-306088"
	I0929 12:06:44.946990  871091 addons.go:69] Setting metrics-server=true in profile "no-preload-306088"
	I0929 12:06:44.947004  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:44.947018  871091 addons.go:238] Setting addon metrics-server=true in "no-preload-306088"
	I0929 12:06:44.947007  871091 addons.go:69] Setting dashboard=true in profile "no-preload-306088"
	W0929 12:06:44.947027  871091 addons.go:247] addon metrics-server should already be in state true
	I0929 12:06:44.947041  871091 addons.go:238] Setting addon dashboard=true in "no-preload-306088"
	W0929 12:06:44.946976  871091 addons.go:247] addon storage-provisioner should already be in state true
	W0929 12:06:44.947052  871091 addons.go:247] addon dashboard should already be in state true
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947081  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947415  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947557  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947574  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947710  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.949123  871091 out.go:179] * Verifying Kubernetes components...
	I0929 12:06:44.951560  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.983162  871091 addons.go:238] Setting addon default-storageclass=true in "no-preload-306088"
	W0929 12:06:44.983184  871091 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:06:44.983259  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.983409  871091 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:06:44.983471  871091 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 12:06:44.984010  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.984739  871091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:44.984759  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:06:44.984810  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.985006  871091 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 12:06:44.985094  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:06:44.985115  871091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:06:44.985173  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.989553  871091 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 12:06:44.990700  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 12:06:44.990720  871091 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 12:06:44.990787  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.013082  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.023016  871091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.023045  871091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:06:45.023112  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.023478  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.027093  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.046756  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.088649  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:45.131986  871091 node_ready.go:35] waiting up to 6m0s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:45.142439  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.156825  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:06:45.156854  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 12:06:45.157091  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 12:06:45.157113  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 12:06:45.171641  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.191370  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:06:45.191407  871091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:06:45.191600  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 12:06:45.191622  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 12:06:45.225277  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 12:06:45.225316  871091 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 12:06:45.227138  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.227166  871091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0929 12:06:45.240720  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.240807  871091 retry.go:31] will retry after 255.439226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.253570  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.253730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 12:06:45.253752  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 12:06:45.256592  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.256642  871091 retry.go:31] will retry after 176.530584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.284730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 12:06:45.284766  871091 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 12:06:45.315598  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 12:06:45.315629  871091 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 12:06:45.337290  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.337352  871091 retry.go:31] will retry after 216.448516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.341267  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 12:06:45.341293  871091 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 12:06:45.367418  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 12:06:45.367447  871091 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 12:06:45.394525  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.394579  871091 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 12:06:45.428230  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.433674  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.496374  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.554373  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0929 12:06:42.757687  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:45.257903  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:47.043268  871091 node_ready.go:49] node "no-preload-306088" is "Ready"
	I0929 12:06:47.043313  871091 node_ready.go:38] duration metric: took 1.911288329s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:47.043336  871091 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:06:47.043393  871091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:06:47.559973  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.131688912s)
	I0929 12:06:47.560210  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.126485829s)
	I0929 12:06:47.561634  871091 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-306088 addons enable metrics-server
	
	I0929 12:06:47.677198  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.180776144s)
	I0929 12:06:47.677264  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.122845465s)
	I0929 12:06:47.677276  871091 api_server.go:72] duration metric: took 2.730527098s to wait for apiserver process to appear ...
	I0929 12:06:47.677284  871091 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:06:47.677301  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:47.677300  871091 addons.go:479] Verifying addon metrics-server=true in "no-preload-306088"
	I0929 12:06:47.679081  871091 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	W0929 12:06:44.905162  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:47.405106  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:47.680000  871091 addons.go:514] duration metric: took 2.733215653s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:47.681720  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:47.681742  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.178112  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.184346  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:48.184379  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.678093  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.683059  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0929 12:06:48.684122  871091 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:48.684148  871091 api_server.go:131] duration metric: took 1.006856952s to wait for apiserver health ...
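(Annotation: the healthz wait above polls https://192.168.94.2:8443/healthz, first seeing HTTP 500 while post-start hooks such as rbac/bootstrap-roles are still completing, then 200 "ok" about a second later. A minimal sketch of that polling loop follows; it is illustrative only, and skipping TLS verification is purely an assumption to keep the example self-contained, whereas a real client would present the cluster CA and client certificates.)

	// healthz_poll.go - illustrative sketch, not minikube's code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls the apiserver /healthz endpoint until it returns
	// HTTP 200 or the overall timeout expires, printing the failing checks
	// (the "[-]poststarthook/..." lines) while the server reports 500.
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.94.2:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}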
	I0929 12:06:48.684159  871091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:48.686922  871091 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:48.686951  871091 system_pods.go:61] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.686958  871091 system_pods.go:61] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.686972  871091 system_pods.go:61] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.686984  871091 system_pods.go:61] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.686999  871091 system_pods.go:61] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:48.687008  871091 system_pods.go:61] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.687018  871091 system_pods.go:61] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.687024  871091 system_pods.go:61] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.687035  871091 system_pods.go:74] duration metric: took 2.869523ms to wait for pod list to return data ...
	I0929 12:06:48.687047  871091 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:48.690705  871091 default_sa.go:45] found service account: "default"
	I0929 12:06:48.690730  871091 default_sa.go:55] duration metric: took 3.675534ms for default service account to be created ...
	I0929 12:06:48.690740  871091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:48.693650  871091 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:48.693684  871091 system_pods.go:89] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.693693  871091 system_pods.go:89] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.693715  871091 system_pods.go:89] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.693725  871091 system_pods.go:89] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.693733  871091 system_pods.go:89] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running
	I0929 12:06:48.693738  871091 system_pods.go:89] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.693743  871091 system_pods.go:89] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.693753  871091 system_pods.go:89] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.693770  871091 system_pods.go:126] duration metric: took 3.022951ms to wait for k8s-apps to be running ...
	I0929 12:06:48.693778  871091 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:48.693838  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:48.706595  871091 system_svc.go:56] duration metric: took 12.805298ms WaitForService to wait for kubelet
	I0929 12:06:48.706622  871091 kubeadm.go:578] duration metric: took 3.759872419s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:48.706643  871091 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:48.709282  871091 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:48.709305  871091 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:48.709317  871091 node_conditions.go:105] duration metric: took 2.669783ms to run NodePressure ...
	I0929 12:06:48.709327  871091 start.go:241] waiting for startup goroutines ...
	I0929 12:06:48.709334  871091 start.go:246] waiting for cluster config update ...
	I0929 12:06:48.709345  871091 start.go:255] writing updated cluster config ...
	I0929 12:06:48.709631  871091 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:48.713435  871091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:48.716857  871091 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:50.722059  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:47.756924  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.757051  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.903749  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:51.904179  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:52.722481  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:55.222976  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:52.257245  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:54.757176  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:56.756246  861376 pod_ready.go:94] pod "coredns-66bc5c9577-zqqdn" is "Ready"
	I0929 12:06:56.756280  861376 pod_ready.go:86] duration metric: took 38.005267391s for pod "coredns-66bc5c9577-zqqdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.761541  861376 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.765343  861376 pod_ready.go:94] pod "etcd-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.765363  861376 pod_ready.go:86] duration metric: took 3.798035ms for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.767218  861376 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.770588  861376 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.770606  861376 pod_ready.go:86] duration metric: took 3.370627ms for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.772342  861376 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.955016  861376 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.955044  861376 pod_ready.go:86] duration metric: took 182.679374ms for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.155127  861376 pod_ready.go:83] waiting for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.555193  861376 pod_ready.go:94] pod "kube-proxy-bspjk" is "Ready"
	I0929 12:06:57.555220  861376 pod_ready.go:86] duration metric: took 400.064967ms for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.755450  861376 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155379  861376 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:58.155405  861376 pod_ready.go:86] duration metric: took 399.927452ms for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155417  861376 pod_ready.go:40] duration metric: took 39.40795228s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:58.201296  861376 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:06:58.203132  861376 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-414542" cluster and "default" namespace by default
	W0929 12:06:53.904220  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:56.404228  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:57.722276  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:00.222038  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:58.904138  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:00.904689  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:03.404607  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:02.722573  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.222722  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.903327  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.903942  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.722224  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.722687  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.904282  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:07:10.403750  866509 pod_ready.go:94] pod "coredns-66bc5c9577-h49hh" is "Ready"
	I0929 12:07:10.403779  866509 pod_ready.go:86] duration metric: took 34.505404913s for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.406142  866509 pod_ready.go:83] waiting for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.409848  866509 pod_ready.go:94] pod "etcd-embed-certs-031687" is "Ready"
	I0929 12:07:10.409884  866509 pod_ready.go:86] duration metric: took 3.705005ms for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.411799  866509 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.415853  866509 pod_ready.go:94] pod "kube-apiserver-embed-certs-031687" is "Ready"
	I0929 12:07:10.415901  866509 pod_ready.go:86] duration metric: took 4.068426ms for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.417734  866509 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.601598  866509 pod_ready.go:94] pod "kube-controller-manager-embed-certs-031687" is "Ready"
	I0929 12:07:10.601629  866509 pod_ready.go:86] duration metric: took 183.870372ms for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.801642  866509 pod_ready.go:83] waiting for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.201791  866509 pod_ready.go:94] pod "kube-proxy-8lx97" is "Ready"
	I0929 12:07:11.201815  866509 pod_ready.go:86] duration metric: took 400.146465ms for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.402190  866509 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802461  866509 pod_ready.go:94] pod "kube-scheduler-embed-certs-031687" is "Ready"
	I0929 12:07:11.802499  866509 pod_ready.go:86] duration metric: took 400.277946ms for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802515  866509 pod_ready.go:40] duration metric: took 35.908487233s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:11.853382  866509 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:11.856798  866509 out.go:179] * Done! kubectl is now configured to use "embed-certs-031687" cluster and "default" namespace by default
	W0929 12:07:12.221602  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:14.221842  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:16.222454  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:18.722820  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:20.725000  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	I0929 12:07:21.222494  871091 pod_ready.go:94] pod "coredns-66bc5c9577-llrxw" is "Ready"
	I0929 12:07:21.222527  871091 pod_ready.go:86] duration metric: took 32.505636564s for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.225025  871091 pod_ready.go:83] waiting for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.228512  871091 pod_ready.go:94] pod "etcd-no-preload-306088" is "Ready"
	I0929 12:07:21.228529  871091 pod_ready.go:86] duration metric: took 3.482765ms for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.230262  871091 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.233598  871091 pod_ready.go:94] pod "kube-apiserver-no-preload-306088" is "Ready"
	I0929 12:07:21.233622  871091 pod_ready.go:86] duration metric: took 3.343035ms for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.235393  871091 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.421017  871091 pod_ready.go:94] pod "kube-controller-manager-no-preload-306088" is "Ready"
	I0929 12:07:21.421047  871091 pod_ready.go:86] duration metric: took 185.636666ms for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.621421  871091 pod_ready.go:83] waiting for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.020579  871091 pod_ready.go:94] pod "kube-proxy-79hf6" is "Ready"
	I0929 12:07:22.020611  871091 pod_ready.go:86] duration metric: took 399.163924ms for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.220586  871091 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620444  871091 pod_ready.go:94] pod "kube-scheduler-no-preload-306088" is "Ready"
	I0929 12:07:22.620469  871091 pod_ready.go:86] duration metric: took 399.857006ms for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620481  871091 pod_ready.go:40] duration metric: took 33.907023232s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:22.667955  871091 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:22.669694  871091 out.go:179] * Done! kubectl is now configured to use "no-preload-306088" cluster and "default" namespace by default
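(Annotation: the pod_ready lines above repeatedly check each kube-system pod for the Ready condition, waiting up to 4m0s before the profile is marked Done. The sketch below shows one way to express an equivalent wait by shelling out to `kubectl wait`; it is illustrative only and is not the helper the test harness uses. The namespace, pod name, and timeout are taken from the log.)

	// pod_ready_wait.go - illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	// waitPodReady blocks until the named pod reports the Ready condition or
	// the timeout elapses, mirroring the "extra waiting" seen in the log.
	func waitPodReady(namespace, pod, timeout string) error {
		cmd := exec.Command("kubectl", "wait",
			"--namespace", namespace,
			"--for=condition=Ready",
			"pod/"+pod,
			"--timeout", timeout)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("%v: %s", err, out)
		}
		fmt.Print(string(out))
		return nil
	}

	func main() {
		if err := waitPodReady("kube-system", "coredns-66bc5c9577-llrxw", "4m0s"); err != nil {
			fmt.Println(err)
		}
	}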
	
	
	==> Docker <==
	Sep 29 12:08:20 no-preload-306088 dockerd[818]: time="2025-09-29T12:08:20.990329071Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:08:28 no-preload-306088 dockerd[818]: time="2025-09-29T12:08:28.539563610Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:08:28 no-preload-306088 dockerd[818]: time="2025-09-29T12:08:28.590948996Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:08:28 no-preload-306088 dockerd[818]: time="2025-09-29T12:08:28.591043504Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:08:28 no-preload-306088 cri-dockerd[1128]: time="2025-09-29T12:08:28Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:09:41 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:41.583694915Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:09:41 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:41.583745290Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:09:41 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:41.586405037Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:09:41 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:41.586460984Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:09:50 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:50.552727237Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:09:50 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:50.606173609Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:09:50 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:50.606310334Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:09:50 no-preload-306088 cri-dockerd[1128]: time="2025-09-29T12:09:50Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:09:50 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:50.623217128Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:09:50 no-preload-306088 dockerd[818]: time="2025-09-29T12:09:50.652212135Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:12:33 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:33.062742646Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:12:33 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:33.062783272Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:12:33 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:33.064847229Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:12:33 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:33.064895779Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:12:34 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:34.499820106Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:12:34 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:34.530606980Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:12:40 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:40.548105245Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:40 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:40.593524606Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:40 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:40.593633400Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:12:40 no-preload-306088 cri-dockerd[1128]: time="2025-09-29T12:12:40Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
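(Annotation: the Docker log above records two recurring pull failures: DNS lookups against fake.domain fail, and the kubernetesui/dashboard pull from Docker Hub hits the unauthenticated pull rate limit. These failures surface in the cluster as Warning events on the affected pods. The sketch below is one hedged way to list those events with kubectl from Go; it assumes kubectl on PATH and a kubeconfig pointing at this cluster.)

	// pull_events.go - illustrative sketch for surfacing image pull failures.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List Warning events (ErrImagePull, ImagePullBackOff, ...) across all
		// namespaces, newest last, so the dashboard and metrics-server pull
		// failures recorded in the Docker log are visible at the pod level.
		out, err := exec.Command("kubectl", "get", "events",
			"--all-namespaces",
			"--field-selector", "type=Warning",
			"--sort-by", ".lastTimestamp").CombinedOutput()
		if err != nil {
			fmt.Println("kubectl get events failed:", err)
		}
		fmt.Print(string(out))
	}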
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6069d4cc945c4       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       2                   650a45c250449       storage-provisioner
	46b841525a645       56cc512116c8f                                                                                         9 minutes ago       Running             busybox                   1                   af0052ac783cc       busybox
	695a8602bc591       52546a367cc9e                                                                                         9 minutes ago       Running             coredns                   1                   0817cfa6d924e       coredns-66bc5c9577-llrxw
	04de2f2efa331       6e38f40d628db                                                                                         9 minutes ago       Exited              storage-provisioner       1                   650a45c250449       storage-provisioner
	63e413deaec6d       df0860106674d                                                                                         9 minutes ago       Running             kube-proxy                1                   5786a938d52ef       kube-proxy-79hf6
	2e89a50fa22a0       46169d968e920                                                                                         9 minutes ago       Running             kube-scheduler            1                   869508ebc6f7f       kube-scheduler-no-preload-306088
	a85939dbef502       5f1f5298c888d                                                                                         9 minutes ago       Running             etcd                      1                   973a42ce3b13d       etcd-no-preload-306088
	7ede5c29532f1       a0af72f2ec6d6                                                                                         9 minutes ago       Running             kube-controller-manager   1                   3d511beab43f5       kube-controller-manager-no-preload-306088
	9703afde994b8       90550c43ad2bc                                                                                         9 minutes ago       Running             kube-apiserver            1                   209a43e67b76e       kube-apiserver-no-preload-306088
	78749e8a0d6c3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              busybox                   0                   757705e35ec09       busybox
	d6c5675d0c4db       52546a367cc9e                                                                                         10 minutes ago      Exited              coredns                   0                   7afc0cf80b590       coredns-66bc5c9577-llrxw
	2ed702618e45b       df0860106674d                                                                                         10 minutes ago      Exited              kube-proxy                0                   2f17d84d2ba37       kube-proxy-79hf6
	7a7e42d61c6cf       90550c43ad2bc                                                                                         10 minutes ago      Exited              kube-apiserver            0                   d732eb1833307       kube-apiserver-no-preload-306088
	58da5b85bf37f       a0af72f2ec6d6                                                                                         10 minutes ago      Exited              kube-controller-manager   0                   ea46b63ce01fc       kube-controller-manager-no-preload-306088
	b128aa5b2b94e       5f1f5298c888d                                                                                         10 minutes ago      Exited              etcd                      0                   c4c68bc2d42e1       etcd-no-preload-306088
	ff7fabe12bd91       46169d968e920                                                                                         10 minutes ago      Exited              kube-scheduler            0                   11596b316c317       kube-scheduler-no-preload-306088
	
	
	==> coredns [695a8602bc59] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44581 - 5114 "HINFO IN 1169221059218682807.6276513997277860298. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020548991s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [d6c5675d0c4d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39009 - 55424 "HINFO IN 56200610660337702.1748388028457110117. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.017413364s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               no-preload-306088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-306088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=no-preload-306088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_05_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:05:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-306088
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:16:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:15:48 +0000   Mon, 29 Sep 2025 12:05:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:15:48 +0000   Mon, 29 Sep 2025 12:05:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:15:48 +0000   Mon, 29 Sep 2025 12:05:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:15:48 +0000   Mon, 29 Sep 2025 12:05:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-306088
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b538631cbe7481ba166a7b39bb33163
	  System UUID:                e3735703-9e50-4250-a924-a82c25214cd9
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-llrxw                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     10m
	  kube-system                 etcd-no-preload-306088                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kube-apiserver-no-preload-306088              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-306088     200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-79hf6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-306088              100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-cbm6p               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         9m59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bmfvn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5bdqx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m36s                  kube-proxy       
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node no-preload-306088 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node no-preload-306088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node no-preload-306088 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           10m                    node-controller  Node no-preload-306088 event: Registered Node no-preload-306088 in Controller
	  Normal  Starting                 9m40s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m40s (x8 over 9m40s)  kubelet          Node no-preload-306088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m40s (x8 over 9m40s)  kubelet          Node no-preload-306088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m40s (x7 over 9m40s)  kubelet          Node no-preload-306088 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m35s                  node-controller  Node no-preload-306088 event: Registered Node no-preload-306088 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e ea 9d d2 75 10 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000032] ll header: 00000000: ff ff ff ff ff ff 02 ed 9c 9f 01 b3 08 06
	[  +7.676274] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 8f 99 59 79 53 08 06
	[  +0.010443] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 ef 7b 7a 25 80 08 06
	[Sep29 12:05] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 2f 1f 69 18 cd 08 06
	[  +1.465609] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e fa a1 d1 16 fd 08 06
	[  +0.010904] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 28 d0 79 65 86 08 06
	[ +11.321410] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 4d be 93 b2 64 08 06
	[  +0.030376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d1 94 90 6f a6 08 06
	[  +0.372330] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ae 62 92 9c b4 08 06
	[Sep29 12:06] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be c7 f6 43 2b 7f 08 06
	[ +17.127071] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a de e7 85 72 24 08 06
	[ +12.501214] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 4d 9c c6 34 d5 08 06
	
	
	==> etcd [a85939dbef50] <==
	{"level":"warn","ts":"2025-09-29T12:06:46.406515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.413188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.419400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.428420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.436023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.445398Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.451951Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.460294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.466672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.472833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.479176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.485414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.492081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.498746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.504643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.510868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.530053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.536234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.542639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.548785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.555709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.565537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.572659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.580145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.635662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37512","server-name":"","error":"EOF"}
	
	
	==> etcd [b128aa5b2b94] <==
	{"level":"warn","ts":"2025-09-29T12:05:30.476645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.484557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.492020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.499821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.514208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.528213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.590557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32770","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:06:25.685500Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:06:25.685576Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"no-preload-306088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:06:25.686267Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:32.688844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:32.688948Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:32.689026Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dfc97eb0aae75b33","current-leader-member-id":"dfc97eb0aae75b33"}
	{"level":"warn","ts":"2025-09-29T12:06:32.689025Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:32.689052Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:32.689074Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:32.689085Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:06:32.689087Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-09-29T12:06:32.689087Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:32.689099Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-09-29T12:06:32.689097Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.94.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:32.693452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"error","ts":"2025-09-29T12:06:32.693510Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.94.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:32.693543Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T12:06:32.693553Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"no-preload-306088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	
	
	==> kernel <==
	 12:16:24 up  1:58,  0 users,  load average: 1.40, 1.42, 2.25
	Linux no-preload-306088 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [7a7e42d61c6c] <==
	W0929 12:06:34.632071       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.661092       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.739074       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.766705       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.772151       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.891739       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.929280       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.933672       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.960640       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.973054       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.079867       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.188159       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.189459       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.199851       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.203109       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.380388       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.387433       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.407230       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.424449       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.473223       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.524023       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.551036       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.558146       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.587154       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.591740       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9703afde994b] <==
	I0929 12:11:48.088178       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:12:23.758432       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:12:48.087179       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:12:48.087241       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:12:48.087261       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:12:48.089310       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:12:48.089398       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:12:48.089415       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:13:02.656988       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:13:51.969481       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:14:13.100702       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:14:48.087368       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:14:48.087436       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:14:48.087461       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:14:48.089633       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:14:48.089727       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:14:48.089742       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:15:11.430925       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:15:35.430207       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [58da5b85bf37] <==
	I0929 12:05:38.078703       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:05:38.078838       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 12:05:38.078860       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:05:38.078984       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:05:38.079008       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:05:38.079009       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:05:38.078989       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:05:38.079592       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 12:05:38.079606       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:05:38.080747       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 12:05:38.082019       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:05:38.082094       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:05:38.082132       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:05:38.082139       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:05:38.082145       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:05:38.083117       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:05:38.084329       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:38.084349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:38.088568       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:05:38.089478       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-306088" podCIDRs=["10.244.0.0/24"]
	I0929 12:05:38.096559       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 12:05:38.101865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:05:38.107018       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:05:38.107033       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:05:38.107048       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [7ede5c29532f] <==
	I0929 12:10:19.792280       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:10:49.753034       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:10:49.799833       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:11:19.757497       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:11:19.807040       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:11:49.762270       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:11:49.814943       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:12:19.766330       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:12:19.821785       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:12:49.771272       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:12:49.829425       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:13:19.775274       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:13:19.836672       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:13:49.779613       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:13:49.843739       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:14:19.784226       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:14:19.850698       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:14:49.788642       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:14:49.857021       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:15:19.793141       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:15:19.864437       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:15:49.797298       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:15:49.872356       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:16:19.801736       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:16:19.879463       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [2ed702618e45] <==
	I0929 12:05:39.837498       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:05:39.943552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:05:40.044666       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:05:40.044966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0929 12:05:40.045591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:05:40.119388       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:05:40.119455       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:05:40.133167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:05:40.134809       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:05:40.134834       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:40.137295       1 config.go:200] "Starting service config controller"
	I0929 12:05:40.137327       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:05:40.137561       1 config.go:309] "Starting node config controller"
	I0929 12:05:40.137625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:05:40.138057       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:05:40.138085       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:05:40.139064       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:05:40.141993       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:05:40.142014       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:05:40.238427       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:05:40.238444       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:05:40.238465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [63e413deaec6] <==
	I0929 12:06:48.166300       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:06:48.227574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:06:48.327779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:06:48.327841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0929 12:06:48.328000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:06:48.355101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:06:48.355193       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:06:48.361175       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:06:48.361551       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:06:48.361572       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:48.363070       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:06:48.363239       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:06:48.363137       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:06:48.363385       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:06:48.363166       1 config.go:309] "Starting node config controller"
	I0929 12:06:48.363408       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:06:48.363414       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:06:48.363096       1 config.go:200] "Starting service config controller"
	I0929 12:06:48.363465       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:06:48.463686       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:06:48.463718       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:06:48.463732       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2e89a50fa22a] <==
	I0929 12:06:45.657172       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:06:47.054776       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:06:47.054807       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:06:47.054820       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:06:47.054830       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:06:47.088813       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:06:47.088847       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:47.092925       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:47.092970       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:47.092972       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:06:47.093624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:06:47.193859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ff7fabe12bd9] <==
	E0929 12:05:31.102158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:05:31.102124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:05:31.102405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:31.102404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:05:31.102549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:31.910906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:31.922100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:05:31.953399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:05:32.007111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:05:32.021511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:32.024706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:05:32.130675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:05:32.139772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:05:32.163992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 12:05:32.169052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:05:32.183135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:05:32.199506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:05:32.207629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:05:32.291748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0929 12:05:35.396173       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:25.685308       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:06:25.685457       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:06:25.685480       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:06:25.685540       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:06:25.685564       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 12:14:38 no-preload-306088 kubelet[1344]: E0929 12:14:38.485098    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:14:44 no-preload-306088 kubelet[1344]: E0929 12:14:44.484751    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:14:48 no-preload-306088 kubelet[1344]: E0929 12:14:48.484577    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:14:53 no-preload-306088 kubelet[1344]: E0929 12:14:53.484948    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:14:57 no-preload-306088 kubelet[1344]: E0929 12:14:57.484668    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:14:59 no-preload-306088 kubelet[1344]: E0929 12:14:59.484645    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:15:07 no-preload-306088 kubelet[1344]: E0929 12:15:07.484733    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:15:10 no-preload-306088 kubelet[1344]: E0929 12:15:10.484476    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:15:12 no-preload-306088 kubelet[1344]: E0929 12:15:12.485281    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:15:19 no-preload-306088 kubelet[1344]: E0929 12:15:19.485337    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:15:22 no-preload-306088 kubelet[1344]: E0929 12:15:22.485025    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:15:24 no-preload-306088 kubelet[1344]: E0929 12:15:24.485516    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:15:31 no-preload-306088 kubelet[1344]: E0929 12:15:31.484468    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:15:35 no-preload-306088 kubelet[1344]: E0929 12:15:35.484966    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:15:36 no-preload-306088 kubelet[1344]: E0929 12:15:36.484675    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:15:42 no-preload-306088 kubelet[1344]: E0929 12:15:42.485106    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:15:47 no-preload-306088 kubelet[1344]: E0929 12:15:47.484678    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:15:48 no-preload-306088 kubelet[1344]: E0929 12:15:48.485119    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:15:57 no-preload-306088 kubelet[1344]: E0929 12:15:57.484923    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:16:00 no-preload-306088 kubelet[1344]: E0929 12:16:00.491093    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:16:02 no-preload-306088 kubelet[1344]: E0929 12:16:02.490576    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:16:10 no-preload-306088 kubelet[1344]: E0929 12:16:10.485032    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:16:14 no-preload-306088 kubelet[1344]: E0929 12:16:14.486657    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:16:16 no-preload-306088 kubelet[1344]: E0929 12:16:16.485321    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:16:22 no-preload-306088 kubelet[1344]: E0929 12:16:22.485479    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	
	
	==> storage-provisioner [04de2f2efa33] <==
	I0929 12:06:48.101582       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:07:18.104409       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6069d4cc945c] <==
	W0929 12:15:59.958270       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:01.961970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:01.966075       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:03.969096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:03.972939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:05.976087       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:05.981354       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:07.984846       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:07.988929       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:09.992196       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:09.996694       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:12.000598       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:12.004607       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:14.007376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:14.012254       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:16.014841       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:16.018870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:18.022448       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:18.026949       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:20.030508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:20.034471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:22.037506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:22.042714       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:24.046274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:16:24.050074       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-306088 -n no-preload-306088
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-306088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-cbm6p dashboard-metrics-scraper-6ffb444bf9-bmfvn kubernetes-dashboard-855c9754f9-5bdqx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-306088 describe pod metrics-server-746fcd58dc-cbm6p dashboard-metrics-scraper-6ffb444bf9-bmfvn kubernetes-dashboard-855c9754f9-5bdqx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-306088 describe pod metrics-server-746fcd58dc-cbm6p dashboard-metrics-scraper-6ffb444bf9-bmfvn kubernetes-dashboard-855c9754f9-5bdqx: exit status 1 (62.353572ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-cbm6p" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-bmfvn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5bdqx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-306088 describe pod metrics-server-746fcd58dc-cbm6p dashboard-metrics-scraper-6ffb444bf9-bmfvn kubernetes-dashboard-855c9754f9-5bdqx: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.40s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-schbp" [71e083e1-076b-456d-a95a-397cfbfe8d83] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 12:15:15.768621  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858855 -n old-k8s-version-858855
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 12:24:03.875447113 +0000 UTC m=+4325.919259197
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-858855 describe po kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-858855 describe po kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-schbp
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-858855/192.168.103.2
Start Time:       Mon, 29 Sep 2025 12:05:38 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-kl8b9 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-kl8b9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp to old-k8s-version-858855
  Normal   Pulling    16m (x4 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     16m (x4 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     16m (x4 over 18m)     kubelet            Error: ErrImagePull
  Warning  Failed     16m (x6 over 18m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    3m13s (x64 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-858855 logs kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-858855 logs kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard: exit status 1 (75.657873ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-schbp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-858855 logs kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-858855 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-858855
helpers_test.go:243: (dbg) docker inspect old-k8s-version-858855:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c",
	        "Created": "2025-09-29T12:04:12.432746747Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 848504,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:05:11.600077832Z",
	            "FinishedAt": "2025-09-29T12:05:08.494386589Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c/hostname",
	        "HostsPath": "/var/lib/docker/containers/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c/hosts",
	        "LogPath": "/var/lib/docker/containers/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c/d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c-json.log",
	        "Name": "/old-k8s-version-858855",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "old-k8s-version-858855:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-858855",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d6b6af9eccb6a7308234424275193660122ac265befe394d81bbc74c860a7b6c",
	                "LowerDir": "/var/lib/docker/overlay2/0a60eea2246e69e0d62749692c852ae3f73ff2acf16c594adc8f9f5ab1393474-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0a60eea2246e69e0d62749692c852ae3f73ff2acf16c594adc8f9f5ab1393474/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0a60eea2246e69e0d62749692c852ae3f73ff2acf16c594adc8f9f5ab1393474/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0a60eea2246e69e0d62749692c852ae3f73ff2acf16c594adc8f9f5ab1393474/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-858855",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-858855/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-858855",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-858855",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-858855",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "958645a5cf70775cbc4b388fdca21a8651ae97a68e0715bac2cb7fe22819a059",
	            "SandboxKey": "/var/run/docker/netns/958645a5cf70",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33503"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33504"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33507"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33505"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33506"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-858855": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7e:5d:d7:35:91:44",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "f0c7082fdeaedacd6a814f0adb6da2805a722459cf4db770dd9f882e32c523fb",
	                    "EndpointID": "d518ee8a6c050656fbaaa4d067f30895a0728c93aef673bb6f46794dbaae4e7f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-858855",
	                        "d6b6af9eccb6"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-858855 -n old-k8s-version-858855
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-858855 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-858855 logs -n 25: (1.133008722s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ ssh     │ -p calico-934155 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo cat /etc/containerd/config.toml                                                                                                                                                                                           │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo containerd config dump                                                                                                                                                                                                    │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p disable-driver-mounts-929504                                                                                                                                                                                                                 │ disable-driver-mounts-929504 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │                     │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ ssh     │ -p calico-934155 sudo systemctl cat crio --no-pager                                                                                                                                                                                             │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                   │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ ssh     │ -p calico-934155 sudo crio config                                                                                                                                                                                                               │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ delete  │ -p calico-934155                                                                                                                                                                                                                                │ calico-934155                │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-858855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ stop    │ -p default-k8s-diff-port-414542 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-031687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p embed-certs-031687 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-031687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-306088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p no-preload-306088 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p no-preload-306088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:06:36
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:06:36.516482  871091 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:06:36.516771  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.516782  871091 out.go:374] Setting ErrFile to fd 2...
	I0929 12:06:36.516786  871091 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:06:36.517034  871091 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:06:36.517566  871091 out.go:368] Setting JSON to false
	I0929 12:06:36.519099  871091 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":6540,"bootTime":1759141056,"procs":388,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:06:36.519186  871091 start.go:140] virtualization: kvm guest
	I0929 12:06:36.521306  871091 out.go:179] * [no-preload-306088] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:06:36.522994  871091 notify.go:220] Checking for updates...
	I0929 12:06:36.523025  871091 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:06:36.524361  871091 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:06:36.526212  871091 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:36.527856  871091 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:06:36.529330  871091 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:06:36.530640  871091 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:06:36.532489  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:36.532971  871091 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:06:36.557847  871091 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:06:36.557955  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.619389  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.606711858 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.619500  871091 docker.go:318] overlay module found
	I0929 12:06:36.621623  871091 out.go:179] * Using the docker driver based on existing profile
	I0929 12:06:36.622958  871091 start.go:304] selected driver: docker
	I0929 12:06:36.622977  871091 start.go:924] validating driver "docker" against &{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.623069  871091 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:06:36.623939  871091 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:06:36.681042  871091 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-29 12:06:36.670856635 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:06:36.681348  871091 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:36.681383  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:36.681440  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:36.681496  871091 start.go:348] cluster config:
	{Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:36.683409  871091 out.go:179] * Starting "no-preload-306088" primary control-plane node in "no-preload-306088" cluster
	I0929 12:06:36.684655  871091 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:06:36.685791  871091 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:06:36.686923  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:36.687033  871091 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:06:36.687071  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:36.687230  871091 cache.go:107] acquiring lock: {Name:mk458b8403b4159d98f7ca606060a1e77262160a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687232  871091 cache.go:107] acquiring lock: {Name:mkf63d99dbdfbf068ef033ecf191a655730e20a6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687337  871091 cache.go:107] acquiring lock: {Name:mkd9e4857d62d04bc7d49138f7e4fb0f42e97bee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687338  871091 cache.go:107] acquiring lock: {Name:mk4450faafd650ccd11a718cb9b7190d17ab5337 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687401  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 12:06:36.687412  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 12:06:36.687392  871091 cache.go:107] acquiring lock: {Name:mkbcd57035e12e42444c6b36c8f1b923cbfef46a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687414  871091 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0" took 202.746µs
	I0929 12:06:36.687421  871091 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0" took 90.507µs
	I0929 12:06:36.687399  871091 cache.go:107] acquiring lock: {Name:mkde0ed0d421c77cb34c222a8ab10a2c13e3e1ac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687387  871091 cache.go:107] acquiring lock: {Name:mk11769872d039acf11fe2041fd2e18abd2ae3a8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687446  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I0929 12:06:36.687455  871091 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 64.616µs
	I0929 12:06:36.687464  871091 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 12:06:36.687467  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 12:06:36.687476  871091 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 144.146µs
	I0929 12:06:36.687484  871091 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 12:06:36.687374  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 12:06:36.687507  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 12:06:36.687466  871091 cache.go:107] acquiring lock: {Name:mk481f9282d27c94586ac987d8a6cd5ea0f1d68c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.687587  871091 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 226.629µs
	I0929 12:06:36.687586  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 12:06:36.687603  871091 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 12:06:36.687581  871091 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 346.559µs
	I0929 12:06:36.687431  871091 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 12:06:36.687607  871091 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0" took 276.399µs
	I0929 12:06:36.687618  871091 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 12:06:36.687620  871091 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 12:06:36.687628  871091 cache.go:115] /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 12:06:36.687644  871091 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0" took 230.083µs
	I0929 12:06:36.687655  871091 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 12:06:36.687663  871091 cache.go:87] Successfully saved all images to host disk.
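All of the cache hits above point at image archives previously saved under the jenkins MINIKUBE_HOME. They can be inspected directly on the build host; a minimal sketch using the paths shown in this log:

	# List the cached image archives minikube just verified (paths taken from this log).
	ls -lh /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/registry.k8s.io/
	ls -lh /home/jenkins/minikube-integration/21655-357219/.minikube/cache/images/amd64/gcr.io/k8s-minikube/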
	I0929 12:06:36.709009  871091 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:06:36.709031  871091 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:06:36.709049  871091 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:06:36.709083  871091 start.go:360] acquireMachinesLock for no-preload-306088: {Name:mk0ed8d49a268e0ff510517b50934257047b58c0 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:06:36.709145  871091 start.go:364] duration metric: took 44.22µs to acquireMachinesLock for "no-preload-306088"
	I0929 12:06:36.709171  871091 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:06:36.709180  871091 fix.go:54] fixHost starting: 
	I0929 12:06:36.709410  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:36.728528  871091 fix.go:112] recreateIfNeeded on no-preload-306088: state=Stopped err=<nil>
	W0929 12:06:36.728557  871091 fix.go:138] unexpected machine state, will restart: <nil>
	W0929 12:06:33.757650  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:35.757705  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:34.860020  866509 addons.go:514] duration metric: took 2.511095137s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:34.860298  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:34.860316  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.355994  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.362405  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:35.362444  866509 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:35.855983  866509 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 12:06:35.860174  866509 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 12:06:35.861328  866509 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:35.861365  866509 api_server.go:131] duration metric: took 1.00564321s to wait for apiserver health ...
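For reference, the probe the log was retrying above can be reproduced by hand against the same endpoint. A minimal sketch using curl (the IP and port are taken from this log; anonymous access to /healthz relies on the default RBAC binding, and skipping TLS verification with -k is only an assumption for a quick manual check):

	# Same health probe the log performs; ?verbose prints the per-check breakdown seen above.
	curl -sk "https://192.168.76.2:8443/healthz?verbose"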
	I0929 12:06:35.861375  866509 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:35.865988  866509 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:35.866018  866509 system_pods.go:61] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.866030  866509 system_pods.go:61] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.866041  866509 system_pods.go:61] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.866050  866509 system_pods.go:61] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.866055  866509 system_pods.go:61] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.866062  866509 system_pods.go:61] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.866068  866509 system_pods.go:61] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.866076  866509 system_pods.go:61] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.866083  866509 system_pods.go:74] duration metric: took 4.69699ms to wait for pod list to return data ...
	I0929 12:06:35.866093  866509 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:35.868695  866509 default_sa.go:45] found service account: "default"
	I0929 12:06:35.868715  866509 default_sa.go:55] duration metric: took 2.61564ms for default service account to be created ...
	I0929 12:06:35.868726  866509 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:35.872060  866509 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:35.872097  866509 system_pods.go:89] "coredns-66bc5c9577-h49hh" [99200b44-2a49-48f0-8c10-6da3efcb3cca] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:35.872135  866509 system_pods.go:89] "etcd-embed-certs-031687" [388cf00b-70e7-4e02-ba3b-42776bf833a1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:35.872153  866509 system_pods.go:89] "kube-apiserver-embed-certs-031687" [fd557c56-622e-4f18-8105-c613b75a3ede] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:35.872164  866509 system_pods.go:89] "kube-controller-manager-embed-certs-031687" [7f2bcfd8-f723-4eed-877c-a56cc50f963b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:35.872173  866509 system_pods.go:89] "kube-proxy-8lx97" [0d35dad9-e907-40a9-b0ce-dd138652494e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:35.872187  866509 system_pods.go:89] "kube-scheduler-embed-certs-031687" [8b05ddd8-a862-4a86-b6d1-e634c47fea96] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:35.872200  866509 system_pods.go:89] "metrics-server-746fcd58dc-w5slh" [f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:35.872215  866509 system_pods.go:89] "storage-provisioner" [701aa6c1-3243-4f77-914c-339f69aa9ca5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 12:06:35.872229  866509 system_pods.go:126] duration metric: took 3.496882ms to wait for k8s-apps to be running ...
	I0929 12:06:35.872241  866509 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:35.872298  866509 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:35.886596  866509 system_svc.go:56] duration metric: took 14.342667ms WaitForService to wait for kubelet
	I0929 12:06:35.886631  866509 kubeadm.go:578] duration metric: took 3.537789699s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:35.886658  866509 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:35.889756  866509 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:35.889792  866509 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:35.889815  866509 node_conditions.go:105] duration metric: took 3.143621ms to run NodePressure ...
	I0929 12:06:35.889827  866509 start.go:241] waiting for startup goroutines ...
	I0929 12:06:35.889846  866509 start.go:246] waiting for cluster config update ...
	I0929 12:06:35.889860  866509 start.go:255] writing updated cluster config ...
	I0929 12:06:35.890142  866509 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:35.893992  866509 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:35.898350  866509 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:37.904542  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:36.730585  871091 out.go:252] * Restarting existing docker container for "no-preload-306088" ...
	I0929 12:06:36.730671  871091 cli_runner.go:164] Run: docker start no-preload-306088
	I0929 12:06:36.986434  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:37.007128  871091 kic.go:430] container "no-preload-306088" state is running.
	I0929 12:06:37.007513  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:37.028527  871091 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/config.json ...
	I0929 12:06:37.028818  871091 machine.go:93] provisionDockerMachine start ...
	I0929 12:06:37.028949  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:37.047803  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:37.048197  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:37.048230  871091 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:06:37.048917  871091 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:35296->127.0.0.1:33523: read: connection reset by peer
	I0929 12:06:40.187221  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.187251  871091 ubuntu.go:182] provisioning hostname "no-preload-306088"
	I0929 12:06:40.187303  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.206043  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.206254  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.206273  871091 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-306088 && echo "no-preload-306088" | sudo tee /etc/hostname
	I0929 12:06:40.358816  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-306088
	
	I0929 12:06:40.358923  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.377596  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:40.377950  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:40.377981  871091 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-306088' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-306088/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-306088' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:06:40.514897  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:06:40.514933  871091 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:06:40.514962  871091 ubuntu.go:190] setting up certificates
	I0929 12:06:40.514972  871091 provision.go:84] configureAuth start
	I0929 12:06:40.515033  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:40.534028  871091 provision.go:143] copyHostCerts
	I0929 12:06:40.534112  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:06:40.534132  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:06:40.534221  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:06:40.534378  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:06:40.534391  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:06:40.534433  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:06:40.534548  871091 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:06:40.534559  871091 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:06:40.534599  871091 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:06:40.534700  871091 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.no-preload-306088 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-306088]
	I0929 12:06:40.796042  871091 provision.go:177] copyRemoteCerts
	I0929 12:06:40.796100  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:06:40.796141  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:40.814638  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:40.913779  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:06:40.940147  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:06:40.966181  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:06:40.992149  871091 provision.go:87] duration metric: took 477.163201ms to configureAuth
	I0929 12:06:40.992177  871091 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:06:40.992354  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:40.992402  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.010729  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.011015  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.011031  871091 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:06:41.149250  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:06:41.149283  871091 ubuntu.go:71] root file system type: overlay
	I0929 12:06:41.149434  871091 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:06:41.149508  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.169382  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.169625  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.169731  871091 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:06:41.327834  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:06:41.327968  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.349146  871091 main.go:141] libmachine: Using SSH client type: native
	I0929 12:06:41.349454  871091 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33523 <nil> <nil>}
	I0929 12:06:41.349487  871091 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:06:41.500464  871091 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:06:41.500497  871091 machine.go:96] duration metric: took 4.471659866s to provisionDockerMachine
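The unit written in the previous step is only swapped in when it differs from the installed one (the `diff ... || { mv ...; systemctl restart docker; }` guard above). A quick way to confirm which dockerd flags are actually in effect inside the node, sketched with standard systemctl commands over `minikube ssh` (profile name taken from this log):

	# Print the effective docker unit and its ExecStart line inside the node.
	minikube -p no-preload-306088 ssh -- sudo systemctl cat docker
	minikube -p no-preload-306088 ssh -- sudo systemctl show docker -p ExecStart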
	I0929 12:06:41.500512  871091 start.go:293] postStartSetup for "no-preload-306088" (driver="docker")
	I0929 12:06:41.500527  871091 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:06:41.500590  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:06:41.500647  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	W0929 12:06:38.257066  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.257540  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:40.404187  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:42.404863  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:41.520904  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.620006  871091 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:06:41.623863  871091 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:06:41.623914  871091 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:06:41.623925  871091 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:06:41.623935  871091 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:06:41.623959  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:06:41.624015  871091 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:06:41.624111  871091 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:06:41.624227  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:06:41.634489  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:41.661187  871091 start.go:296] duration metric: took 160.643724ms for postStartSetup
	I0929 12:06:41.661275  871091 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:06:41.661317  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.679286  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.773350  871091 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:06:41.778053  871091 fix.go:56] duration metric: took 5.068864392s for fixHost
	I0929 12:06:41.778084  871091 start.go:83] releasing machines lock for "no-preload-306088", held for 5.068924928s
	I0929 12:06:41.778174  871091 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-306088
	I0929 12:06:41.796247  871091 ssh_runner.go:195] Run: cat /version.json
	I0929 12:06:41.796329  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.796378  871091 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:06:41.796452  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:41.815939  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.816193  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:41.990299  871091 ssh_runner.go:195] Run: systemctl --version
	I0929 12:06:41.995288  871091 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:06:42.000081  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:06:42.020438  871091 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:06:42.020518  871091 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:06:42.029627  871091 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:06:42.029658  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.029697  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.029845  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.046748  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:06:42.057142  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:06:42.067569  871091 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:06:42.067621  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:06:42.078146  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.089207  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:06:42.099515  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:06:42.109953  871091 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:06:42.119715  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:06:42.130148  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:06:42.140184  871091 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 12:06:42.151082  871091 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:06:42.161435  871091 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:06:42.171100  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.243863  871091 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:06:42.322789  871091 start.go:495] detecting cgroup driver to use...
	I0929 12:06:42.322843  871091 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:06:42.322910  871091 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:06:42.336670  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.348890  871091 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:06:42.364257  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:06:42.376038  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:06:42.387832  871091 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:06:42.405901  871091 ssh_runner.go:195] Run: which cri-dockerd
	I0929 12:06:42.409515  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:06:42.419370  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:06:42.438082  871091 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:06:42.511679  871091 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:06:42.584368  871091 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:06:42.584521  871091 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 12:06:42.604074  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:06:42.615691  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:42.684549  871091 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 12:06:43.531184  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:06:43.543167  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:06:43.555540  871091 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 12:06:43.568219  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.580095  871091 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:06:43.648390  871091 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:06:43.718653  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.787645  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:06:43.810310  871091 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:06:43.822583  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:43.892062  871091 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:06:43.972699  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:06:43.985893  871091 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:06:43.985990  871091 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:06:43.990107  871091 start.go:563] Will wait 60s for crictl version
	I0929 12:06:43.990186  871091 ssh_runner.go:195] Run: which crictl
	I0929 12:06:43.993712  871091 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:06:44.032208  871091 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:06:44.032285  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.059274  871091 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:06:44.086497  871091 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:06:44.086597  871091 cli_runner.go:164] Run: docker network inspect no-preload-306088 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:06:44.103997  871091 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I0929 12:06:44.108202  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:06:44.121433  871091 kubeadm.go:875] updating cluster {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:06:44.121548  871091 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:06:44.121582  871091 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:06:44.142018  871091 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 12:06:44.142049  871091 cache_images.go:85] Images are preloaded, skipping loading
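Because every required image is already present in the node's Docker daemon, minikube skips loading the cached archives. The same check can be repeated by hand with the command the log just ran (profile name taken from this log):

	# List node-local images exactly as the log does.
	minikube -p no-preload-306088 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'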
	I0929 12:06:44.142057  871091 kubeadm.go:926] updating node { 192.168.94.2 8443 v1.34.0 docker true true} ...
	I0929 12:06:44.142162  871091 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-306088 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:06:44.142214  871091 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:06:44.196459  871091 cni.go:84] Creating CNI manager for ""
	I0929 12:06:44.196503  871091 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:06:44.196520  871091 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 12:06:44.196548  871091 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-306088 NodeName:no-preload-306088 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:06:44.196683  871091 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-306088"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:06:44.196744  871091 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:06:44.206772  871091 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:06:44.206838  871091 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:06:44.216022  871091 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0929 12:06:44.234761  871091 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:06:44.253842  871091 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2217 bytes)
	I0929 12:06:44.274561  871091 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:06:44.278469  871091 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
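	Note: the grep/bash pipeline above makes the control-plane.minikube.internal entry in /etc/hosts idempotent: any stale line for that host is removed before the current address is appended. A minimal Go sketch of the same rewrite, assuming root privileges (the log achieves this safely via a temp file and sudo cp):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Idempotent update mirroring the bash pipeline above: drop any existing
        // control-plane.minikube.internal line, then append the current address.
        const hostsPath = "/etc/hosts" // needs root; the log writes via a temp file and `sudo cp`
        const entry = "192.168.94.2\tcontrol-plane.minikube.internal"

        data, err := os.ReadFile(hostsPath)
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        fmt.Println("updated", hostsPath)
    }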
	I0929 12:06:44.290734  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.362332  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:44.386713  871091 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088 for IP: 192.168.94.2
	I0929 12:06:44.386744  871091 certs.go:194] generating shared ca certs ...
	I0929 12:06:44.386768  871091 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.386954  871091 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:06:44.387011  871091 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:06:44.387021  871091 certs.go:256] generating profile certs ...
	I0929 12:06:44.387100  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/client.key
	I0929 12:06:44.387155  871091 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key.eb5a4896
	I0929 12:06:44.387190  871091 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key
	I0929 12:06:44.387288  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:06:44.387320  871091 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:06:44.387329  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:06:44.387351  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:06:44.387373  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:06:44.387393  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:06:44.387440  871091 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:06:44.388149  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:06:44.419158  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:06:44.448205  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:06:44.482979  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:06:44.517557  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:06:44.549867  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 12:06:44.576134  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:06:44.604658  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/no-preload-306088/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 12:06:44.631756  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:06:44.658081  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:06:44.684187  871091 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:06:44.710650  871091 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:06:44.729717  871091 ssh_runner.go:195] Run: openssl version
	I0929 12:06:44.735824  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:06:44.745812  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749234  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.749293  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:06:44.756789  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:06:44.767948  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:06:44.778834  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782611  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.782681  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:06:44.790603  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:06:44.800010  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:06:44.810306  871091 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814380  871091 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.814509  871091 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:06:44.822959  871091 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
	I0929 12:06:44.834110  871091 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:06:44.837912  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:06:44.844692  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:06:44.851275  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:06:44.858576  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:06:44.866396  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:06:44.875491  871091 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
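	Note: the `openssl x509 -checkend 86400` calls above verify that each control-plane certificate remains valid for at least another 24 hours before the cluster is restarted. A minimal Go sketch of the same check, using one of the paths from the log (an equivalent illustration, not minikube's implementation):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Path taken from the log above; any of the checked certificates would do.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            panic("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // Same intent as `openssl x509 -checkend 86400`: flag certificates
        // that expire within the next 24 hours.
        if time.Until(cert.NotAfter) < 24*time.Hour {
            fmt.Println("certificate expires within 24h:", cert.NotAfter)
            os.Exit(1)
        }
        fmt.Println("certificate valid beyond 24h:", cert.NotAfter)
    }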
	I0929 12:06:44.883074  871091 kubeadm.go:392] StartCluster: {Name:no-preload-306088 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-306088 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:06:44.883211  871091 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:06:44.904790  871091 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:06:44.917300  871091 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:06:44.917322  871091 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:06:44.917374  871091 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:06:44.927571  871091 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:06:44.928675  871091 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-306088" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.929373  871091 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-357219/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-306088" cluster setting kubeconfig missing "no-preload-306088" context setting]
	I0929 12:06:44.930612  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.932840  871091 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:06:44.943928  871091 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.94.2
	I0929 12:06:44.943969  871091 kubeadm.go:593] duration metric: took 26.639509ms to restartPrimaryControlPlane
	I0929 12:06:44.943982  871091 kubeadm.go:394] duration metric: took 60.918658ms to StartCluster
	I0929 12:06:44.944003  871091 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.944082  871091 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:06:44.946478  871091 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:06:44.946713  871091 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:06:44.946792  871091 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:06:44.946942  871091 addons.go:69] Setting storage-provisioner=true in profile "no-preload-306088"
	I0929 12:06:44.946950  871091 addons.go:69] Setting default-storageclass=true in profile "no-preload-306088"
	I0929 12:06:44.946967  871091 addons.go:238] Setting addon storage-provisioner=true in "no-preload-306088"
	I0929 12:06:44.946975  871091 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-306088"
	I0929 12:06:44.946990  871091 addons.go:69] Setting metrics-server=true in profile "no-preload-306088"
	I0929 12:06:44.947004  871091 config.go:182] Loaded profile config "no-preload-306088": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:06:44.947018  871091 addons.go:238] Setting addon metrics-server=true in "no-preload-306088"
	I0929 12:06:44.947007  871091 addons.go:69] Setting dashboard=true in profile "no-preload-306088"
	W0929 12:06:44.947027  871091 addons.go:247] addon metrics-server should already be in state true
	I0929 12:06:44.947041  871091 addons.go:238] Setting addon dashboard=true in "no-preload-306088"
	W0929 12:06:44.946976  871091 addons.go:247] addon storage-provisioner should already be in state true
	W0929 12:06:44.947052  871091 addons.go:247] addon dashboard should already be in state true
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947081  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947077  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.947415  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947557  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947574  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.947710  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.949123  871091 out.go:179] * Verifying Kubernetes components...
	I0929 12:06:44.951560  871091 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:06:44.983162  871091 addons.go:238] Setting addon default-storageclass=true in "no-preload-306088"
	W0929 12:06:44.983184  871091 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:06:44.983259  871091 host.go:66] Checking if "no-preload-306088" exists ...
	I0929 12:06:44.983409  871091 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:06:44.983471  871091 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 12:06:44.984010  871091 cli_runner.go:164] Run: docker container inspect no-preload-306088 --format={{.State.Status}}
	I0929 12:06:44.984739  871091 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:44.984759  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:06:44.984810  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.985006  871091 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 12:06:44.985094  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:06:44.985115  871091 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:06:44.985173  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:44.989553  871091 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 12:06:44.990700  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 12:06:44.990720  871091 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 12:06:44.990787  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.013082  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.023016  871091 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.023045  871091 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:06:45.023112  871091 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-306088
	I0929 12:06:45.023478  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.027093  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.046756  871091 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33523 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/no-preload-306088/id_rsa Username:docker}
	I0929 12:06:45.088649  871091 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:06:45.131986  871091 node_ready.go:35] waiting up to 6m0s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:45.142439  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.156825  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:06:45.156854  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 12:06:45.157091  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 12:06:45.157113  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 12:06:45.171641  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.191370  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:06:45.191407  871091 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:06:45.191600  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 12:06:45.191622  871091 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 12:06:45.225277  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 12:06:45.225316  871091 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 12:06:45.227138  871091 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.227166  871091 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	W0929 12:06:45.240720  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.240807  871091 retry.go:31] will retry after 255.439226ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.253570  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:06:45.253730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 12:06:45.253752  871091 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 12:06:45.256592  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.256642  871091 retry.go:31] will retry after 176.530584ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.284730  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 12:06:45.284766  871091 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 12:06:45.315598  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 12:06:45.315629  871091 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 12:06:45.337290  871091 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.337352  871091 retry.go:31] will retry after 216.448516ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:06:45.341267  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 12:06:45.341293  871091 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 12:06:45.367418  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 12:06:45.367447  871091 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 12:06:45.394525  871091 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.394579  871091 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 12:06:45.428230  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:06:45.433674  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:06:45.496374  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:06:45.554373  871091 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
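	Note: the apply failures above are expected while the restarted apiserver is still coming up; the addon installer retries after a short randomized delay (retry.go) and then re-applies with `kubectl apply --force`. A minimal retry-with-delay sketch around kubectl, assuming kubectl is on PATH (illustrative only, not minikube's retry package):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // applyWithRetry runs `kubectl apply -f <manifest>` until it succeeds or
    // the attempts are exhausted, sleeping briefly between tries.
    func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            out, e := exec.Command("kubectl", "apply", "-f", manifest).CombinedOutput()
            if e == nil {
                return nil
            }
            err = fmt.Errorf("attempt %d failed: %v: %s", i+1, e, out)
            time.Sleep(delay)
        }
        return err
    }

    func main() {
        if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml", 5, 250*time.Millisecond); err != nil {
            fmt.Println(err)
        }
    }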
	W0929 12:06:42.757687  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:45.257903  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:47.043268  871091 node_ready.go:49] node "no-preload-306088" is "Ready"
	I0929 12:06:47.043313  871091 node_ready.go:38] duration metric: took 1.911288329s for node "no-preload-306088" to be "Ready" ...
	I0929 12:06:47.043336  871091 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:06:47.043393  871091 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:06:47.559973  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.131688912s)
	I0929 12:06:47.560210  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.126485829s)
	I0929 12:06:47.561634  871091 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-306088 addons enable metrics-server
	
	I0929 12:06:47.677198  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.180776144s)
	I0929 12:06:47.677264  871091 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.122845465s)
	I0929 12:06:47.677276  871091 api_server.go:72] duration metric: took 2.730527098s to wait for apiserver process to appear ...
	I0929 12:06:47.677284  871091 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:06:47.677301  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:47.677300  871091 addons.go:479] Verifying addon metrics-server=true in "no-preload-306088"
	I0929 12:06:47.679081  871091 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	W0929 12:06:44.905162  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:47.405106  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:06:47.680000  871091 addons.go:514] duration metric: took 2.733215653s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:06:47.681720  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:47.681742  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.178112  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.184346  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:06:48.184379  871091 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:06:48.678093  871091 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0929 12:06:48.683059  871091 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I0929 12:06:48.684122  871091 api_server.go:141] control plane version: v1.34.0
	I0929 12:06:48.684148  871091 api_server.go:131] duration metric: took 1.006856952s to wait for apiserver health ...
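	Note: the 500 responses above come from the apiserver's /healthz endpoint while post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are still completing; the wait loop simply re-polls until a 200 is returned, which here takes about one second. A minimal polling sketch against the same endpoint, using the address from the log and skipping TLS verification for brevity (the real client trusts minikube's CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 2 * time.Second,
            Transport: &http.Transport{
                // Sketch only: skip verification instead of loading minikube's CA bundle.
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        for i := 0; i < 20; i++ {
            resp, err := client.Get("https://192.168.94.2:8443/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("apiserver did not report healthy in time")
    }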
	I0929 12:06:48.684159  871091 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:06:48.686922  871091 system_pods.go:59] 8 kube-system pods found
	I0929 12:06:48.686951  871091 system_pods.go:61] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.686958  871091 system_pods.go:61] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.686972  871091 system_pods.go:61] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.686984  871091 system_pods.go:61] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.686999  871091 system_pods.go:61] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 12:06:48.687008  871091 system_pods.go:61] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.687018  871091 system_pods.go:61] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.687024  871091 system_pods.go:61] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.687035  871091 system_pods.go:74] duration metric: took 2.869523ms to wait for pod list to return data ...
	I0929 12:06:48.687047  871091 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:06:48.690705  871091 default_sa.go:45] found service account: "default"
	I0929 12:06:48.690730  871091 default_sa.go:55] duration metric: took 3.675534ms for default service account to be created ...
	I0929 12:06:48.690740  871091 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 12:06:48.693650  871091 system_pods.go:86] 8 kube-system pods found
	I0929 12:06:48.693684  871091 system_pods.go:89] "coredns-66bc5c9577-llrxw" [f71e219c-12ce-4d28-9e3b-3d63730eb151] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:06:48.693693  871091 system_pods.go:89] "etcd-no-preload-306088" [eebef832-c896-4f63-8d83-c1b6827179e9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:06:48.693715  871091 system_pods.go:89] "kube-apiserver-no-preload-306088" [1856b8b1-cc61-4f2c-b99d-67992966d9d8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:06:48.693725  871091 system_pods.go:89] "kube-controller-manager-no-preload-306088" [482a09d9-06df-4f0f-9d00-1e61f2917a2e] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:06:48.693733  871091 system_pods.go:89] "kube-proxy-79hf6" [98f1dd87-196e-4be2-9522-5e21eaef09a9] Running
	I0929 12:06:48.693738  871091 system_pods.go:89] "kube-scheduler-no-preload-306088" [c40ea090-59be-4bd0-8915-49d85a17518b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:06:48.693743  871091 system_pods.go:89] "metrics-server-746fcd58dc-cbm6p" [e65b594e-5e46-445b-8dc4-ff9d686cdc94] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:06:48.693753  871091 system_pods.go:89] "storage-provisioner" [2f7729f1-fde4-435e-ba38-42b755fb9e32] Running
	I0929 12:06:48.693770  871091 system_pods.go:126] duration metric: took 3.022951ms to wait for k8s-apps to be running ...
	I0929 12:06:48.693778  871091 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 12:06:48.693838  871091 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 12:06:48.706595  871091 system_svc.go:56] duration metric: took 12.805298ms WaitForService to wait for kubelet
	I0929 12:06:48.706622  871091 kubeadm.go:578] duration metric: took 3.759872419s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 12:06:48.706643  871091 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:06:48.709282  871091 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:06:48.709305  871091 node_conditions.go:123] node cpu capacity is 8
	I0929 12:06:48.709317  871091 node_conditions.go:105] duration metric: took 2.669783ms to run NodePressure ...
	I0929 12:06:48.709327  871091 start.go:241] waiting for startup goroutines ...
	I0929 12:06:48.709334  871091 start.go:246] waiting for cluster config update ...
	I0929 12:06:48.709345  871091 start.go:255] writing updated cluster config ...
	I0929 12:06:48.709631  871091 ssh_runner.go:195] Run: rm -f paused
	I0929 12:06:48.713435  871091 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:48.716857  871091 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 12:06:50.722059  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:47.756924  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.757051  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:49.903749  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:51.904179  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:52.722481  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:55.222976  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:52.257245  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	W0929 12:06:54.757176  861376 pod_ready.go:104] pod "coredns-66bc5c9577-zqqdn" is not "Ready", error: <nil>
	I0929 12:06:56.756246  861376 pod_ready.go:94] pod "coredns-66bc5c9577-zqqdn" is "Ready"
	I0929 12:06:56.756280  861376 pod_ready.go:86] duration metric: took 38.005267391s for pod "coredns-66bc5c9577-zqqdn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.761541  861376 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.765343  861376 pod_ready.go:94] pod "etcd-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.765363  861376 pod_ready.go:86] duration metric: took 3.798035ms for pod "etcd-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.767218  861376 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.770588  861376 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.770606  861376 pod_ready.go:86] duration metric: took 3.370627ms for pod "kube-apiserver-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.772342  861376 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:56.955016  861376 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:56.955044  861376 pod_ready.go:86] duration metric: took 182.679374ms for pod "kube-controller-manager-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.155127  861376 pod_ready.go:83] waiting for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.555193  861376 pod_ready.go:94] pod "kube-proxy-bspjk" is "Ready"
	I0929 12:06:57.555220  861376 pod_ready.go:86] duration metric: took 400.064967ms for pod "kube-proxy-bspjk" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:57.755450  861376 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155379  861376 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-414542" is "Ready"
	I0929 12:06:58.155405  861376 pod_ready.go:86] duration metric: took 399.927452ms for pod "kube-scheduler-default-k8s-diff-port-414542" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:06:58.155417  861376 pod_ready.go:40] duration metric: took 39.40795228s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:06:58.201296  861376 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:06:58.203132  861376 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-414542" cluster and "default" namespace by default
	W0929 12:06:53.904220  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:56.404228  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:06:57.722276  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:00.222038  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:06:58.904138  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:00.904689  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:03.404607  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:02.722573  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.222722  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:05.903327  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.903942  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	W0929 12:07:07.722224  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.722687  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:09.904282  866509 pod_ready.go:104] pod "coredns-66bc5c9577-h49hh" is not "Ready", error: <nil>
	I0929 12:07:10.403750  866509 pod_ready.go:94] pod "coredns-66bc5c9577-h49hh" is "Ready"
	I0929 12:07:10.403779  866509 pod_ready.go:86] duration metric: took 34.505404913s for pod "coredns-66bc5c9577-h49hh" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.406142  866509 pod_ready.go:83] waiting for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.409848  866509 pod_ready.go:94] pod "etcd-embed-certs-031687" is "Ready"
	I0929 12:07:10.409884  866509 pod_ready.go:86] duration metric: took 3.705005ms for pod "etcd-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.411799  866509 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.415853  866509 pod_ready.go:94] pod "kube-apiserver-embed-certs-031687" is "Ready"
	I0929 12:07:10.415901  866509 pod_ready.go:86] duration metric: took 4.068426ms for pod "kube-apiserver-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.417734  866509 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.601598  866509 pod_ready.go:94] pod "kube-controller-manager-embed-certs-031687" is "Ready"
	I0929 12:07:10.601629  866509 pod_ready.go:86] duration metric: took 183.870372ms for pod "kube-controller-manager-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:10.801642  866509 pod_ready.go:83] waiting for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.201791  866509 pod_ready.go:94] pod "kube-proxy-8lx97" is "Ready"
	I0929 12:07:11.201815  866509 pod_ready.go:86] duration metric: took 400.146465ms for pod "kube-proxy-8lx97" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.402190  866509 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802461  866509 pod_ready.go:94] pod "kube-scheduler-embed-certs-031687" is "Ready"
	I0929 12:07:11.802499  866509 pod_ready.go:86] duration metric: took 400.277946ms for pod "kube-scheduler-embed-certs-031687" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:11.802515  866509 pod_ready.go:40] duration metric: took 35.908487233s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:11.853382  866509 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:11.856798  866509 out.go:179] * Done! kubectl is now configured to use "embed-certs-031687" cluster and "default" namespace by default
	W0929 12:07:12.221602  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:14.221842  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:16.222454  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:18.722820  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	W0929 12:07:20.725000  871091 pod_ready.go:104] pod "coredns-66bc5c9577-llrxw" is not "Ready", error: <nil>
	I0929 12:07:21.222494  871091 pod_ready.go:94] pod "coredns-66bc5c9577-llrxw" is "Ready"
	I0929 12:07:21.222527  871091 pod_ready.go:86] duration metric: took 32.505636564s for pod "coredns-66bc5c9577-llrxw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.225025  871091 pod_ready.go:83] waiting for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.228512  871091 pod_ready.go:94] pod "etcd-no-preload-306088" is "Ready"
	I0929 12:07:21.228529  871091 pod_ready.go:86] duration metric: took 3.482765ms for pod "etcd-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.230262  871091 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.233598  871091 pod_ready.go:94] pod "kube-apiserver-no-preload-306088" is "Ready"
	I0929 12:07:21.233622  871091 pod_ready.go:86] duration metric: took 3.343035ms for pod "kube-apiserver-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.235393  871091 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.421017  871091 pod_ready.go:94] pod "kube-controller-manager-no-preload-306088" is "Ready"
	I0929 12:07:21.421047  871091 pod_ready.go:86] duration metric: took 185.636666ms for pod "kube-controller-manager-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:21.621421  871091 pod_ready.go:83] waiting for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.020579  871091 pod_ready.go:94] pod "kube-proxy-79hf6" is "Ready"
	I0929 12:07:22.020611  871091 pod_ready.go:86] duration metric: took 399.163924ms for pod "kube-proxy-79hf6" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.220586  871091 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620444  871091 pod_ready.go:94] pod "kube-scheduler-no-preload-306088" is "Ready"
	I0929 12:07:22.620469  871091 pod_ready.go:86] duration metric: took 399.857006ms for pod "kube-scheduler-no-preload-306088" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 12:07:22.620481  871091 pod_ready.go:40] duration metric: took 33.907023232s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 12:07:22.667955  871091 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:07:22.669694  871091 out.go:179] * Done! kubectl is now configured to use "no-preload-306088" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 12:11:15 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:15.639483465Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:11:15 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:15.641446600Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:11:15 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:15.641476269Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:11:20 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:20.523109595Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:11:20 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:11:20.559671206Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:16:24 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:24.564841748Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:16:24 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:24.611090080Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:16:24 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:24.611210118Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:16:24 old-k8s-version-858855 cri-dockerd[1109]: time="2025-09-29T12:16:24Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:16:25 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:25.577467399Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:16:25 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:25.577512942Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:16:25 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:25.579781051Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:16:25 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:25.579821733Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:16:28 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:28.523400263Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:16:28 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:16:28.556412627Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:21:27 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:27.997424150Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:21:27 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:27.997477934Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:21:27 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:27.999703889Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:21:27 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:27.999750638Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.103.1:53: no such host"
	Sep 29 12:21:32 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:32.566314636Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:21:32 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:32.613454931Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:21:32 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:32.613546371Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:21:32 old-k8s-version-858855 cri-dockerd[1109]: time="2025-09-29T12:21:32Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:21:38 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:38.526728082Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:21:38 old-k8s-version-858855 dockerd[801]: time="2025-09-29T12:21:38.560537304Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	942edd51c699f       6e38f40d628db                                                                                         17 minutes ago      Running             storage-provisioner       2                   07bffe4f8ab31       storage-provisioner
	40d1936f7a182       ead0a4a53df89                                                                                         18 minutes ago      Running             coredns                   1                   475c4fa557701       coredns-5dd5756b68-xbvjd
	c1e6b6259f0e6       56cc512116c8f                                                                                         18 minutes ago      Running             busybox                   1                   139c0966fc4c3       busybox
	8924ee529df34       6e38f40d628db                                                                                         18 minutes ago      Exited              storage-provisioner       1                   07bffe4f8ab31       storage-provisioner
	22ba39d2ae5a3       ea1030da44aa1                                                                                         18 minutes ago      Running             kube-proxy                1                   3ddb6636f3ce5       kube-proxy-9w9zt
	ee084712c1b8e       4be79c38a4bab                                                                                         18 minutes ago      Running             kube-controller-manager   1                   c15364d04af73       kube-controller-manager-old-k8s-version-858855
	e1abbb3530f23       73deb9a3f7025                                                                                         18 minutes ago      Running             etcd                      1                   8d8b7b4c01209       etcd-old-k8s-version-858855
	566c90e1275a8       bb5e0dde9054c                                                                                         18 minutes ago      Running             kube-apiserver            1                   7e7ee9522cbcb       kube-apiserver-old-k8s-version-858855
	f621e5a4db271       f6f496300a2ae                                                                                         18 minutes ago      Running             kube-scheduler            1                   238b013375b50       kube-scheduler-old-k8s-version-858855
	72d289f470fa3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Exited              busybox                   0                   785221f4de24e       busybox
	ca05612b0c1a1       ead0a4a53df89                                                                                         19 minutes ago      Exited              coredns                   0                   af4f4f5a90e27       coredns-5dd5756b68-xbvjd
	d3482105a1e11       ea1030da44aa1                                                                                         19 minutes ago      Exited              kube-proxy                0                   52f0f8d9723f0       kube-proxy-9w9zt
	f16a413904c89       bb5e0dde9054c                                                                                         19 minutes ago      Exited              kube-apiserver            0                   c690998fe1b7f       kube-apiserver-old-k8s-version-858855
	d89f29914e486       73deb9a3f7025                                                                                         19 minutes ago      Exited              etcd                      0                   e464438e3531d       etcd-old-k8s-version-858855
	b657e8edad2ba       4be79c38a4bab                                                                                         19 minutes ago      Exited              kube-controller-manager   0                   3f6869d6bebc9       kube-controller-manager-old-k8s-version-858855
	7ec694630b5d1       f6f496300a2ae                                                                                         19 minutes ago      Exited              kube-scheduler            0                   64f650385a37c       kube-scheduler-old-k8s-version-858855
	
	
	==> coredns [40d1936f7a18] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:49446 - 6398 "HINFO IN 2432455842848361899.6694524293727266407. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016906895s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [ca05612b0c1a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-858855
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-858855
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=old-k8s-version-858855
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_04_30_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:04:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-858855
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:23:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:20:44 +0000   Mon, 29 Sep 2025 12:04:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:20:44 +0000   Mon, 29 Sep 2025 12:04:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:20:44 +0000   Mon, 29 Sep 2025 12:04:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:20:44 +0000   Mon, 29 Sep 2025 12:04:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-858855
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 25c4315876594b7ebc42d99e6e882c81
	  System UUID:                0d302006-e090-41d5-9094-71b88b7d0779
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-xbvjd                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-old-k8s-version-858855                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kube-apiserver-old-k8s-version-858855             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-858855    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-9w9zt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-858855             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-cqfgh                   100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-dkknq        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-schbp             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-858855 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-858855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-858855 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-858855 event: Registered Node old-k8s-version-858855 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x9 over 18m)  kubelet          Node old-k8s-version-858855 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x7 over 18m)  kubelet          Node old-k8s-version-858855 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node old-k8s-version-858855 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-858855 event: Registered Node old-k8s-version-858855 in Controller
	
	
	==> dmesg <==
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7e ea 9d d2 75 10 08 06
	[  +0.000345] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000032] ll header: 00000000: ff ff ff ff ff ff 02 ed 9c 9f 01 b3 08 06
	[  +7.676274] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 8f 99 59 79 53 08 06
	[  +0.010443] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 ef 7b 7a 25 80 08 06
	[Sep29 12:05] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 2f 1f 69 18 cd 08 06
	[  +1.465609] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e fa a1 d1 16 fd 08 06
	[  +0.010904] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 28 d0 79 65 86 08 06
	[ +11.321410] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 4d be 93 b2 64 08 06
	[  +0.030376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d1 94 90 6f a6 08 06
	[  +0.372330] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ae 62 92 9c b4 08 06
	[Sep29 12:06] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be c7 f6 43 2b 7f 08 06
	[ +17.127071] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a de e7 85 72 24 08 06
	[ +12.501214] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 4d 9c c6 34 d5 08 06
	
	
	==> etcd [d89f29914e48] <==
	{"level":"info","ts":"2025-09-29T12:04:26.215031Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 2"}
	{"level":"info","ts":"2025-09-29T12:04:26.215047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-29T12:04:26.215059Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 2"}
	{"level":"info","ts":"2025-09-29T12:04:26.21507Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-29T12:04:26.21617Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-858855 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T12:04:26.216205Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:04:26.216258Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:04:26.216295Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:04:26.217204Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T12:04:26.217227Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T12:04:26.217926Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:04:26.218088Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:04:26.218119Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T12:04:26.218861Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T12:04:26.219415Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-09-29T12:04:58.212733Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:04:58.212821Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"old-k8s-version-858855","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	{"level":"warn","ts":"2025-09-29T12:04:58.212955Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:04:58.215977Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:04:58.257329Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:04:58.257394Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.103.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:04:58.259441Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f23060b075c4c089","current-leader-member-id":"f23060b075c4c089"}
	{"level":"info","ts":"2025-09-29T12:04:58.261427Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-29T12:04:58.261548Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-29T12:04:58.261561Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"old-k8s-version-858855","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"]}
	
	
	==> etcd [e1abbb3530f2] <==
	{"level":"info","ts":"2025-09-29T12:05:22.90111Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T12:05:22.90135Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-29T12:05:22.901806Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-09-29T12:05:22.901385Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T12:05:22.901406Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T12:05:24.586782Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T12:05:24.586834Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T12:05:24.586893Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-09-29T12:05:24.586915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T12:05:24.586924Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-09-29T12:05:24.586933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-09-29T12:05:24.586949Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-09-29T12:05:24.588481Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-858855 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T12:05:24.588515Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:05:24.588509Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T12:05:24.588727Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T12:05:24.588769Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T12:05:24.589742Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T12:05:24.589755Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-09-29T12:15:24.612119Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":908}
	{"level":"info","ts":"2025-09-29T12:15:24.613768Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":908,"took":"1.35776ms","hash":985401804}
	{"level":"info","ts":"2025-09-29T12:15:24.613808Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":985401804,"revision":908,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T12:20:24.617652Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1158}
	{"level":"info","ts":"2025-09-29T12:20:24.619124Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1158,"took":"1.162755ms","hash":1634059893}
	{"level":"info","ts":"2025-09-29T12:20:24.61917Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1634059893,"revision":1158,"compact-revision":908}
	
	
	==> kernel <==
	 12:24:05 up  2:06,  0 users,  load average: 0.63, 0.71, 1.57
	Linux old-k8s-version-858855 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [566c90e1275a] <==
	I0929 12:20:26.732299       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:20:26.732298       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:20:26.732392       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 12:20:26.733340       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:21:25.651133       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.91.25:443: connect: connection refused
	I0929 12:21:25.651161       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 12:21:26.733454       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:21:26.733491       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 12:21:26.733498       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:21:26.733455       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:21:26.733576       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 12:21:26.734567       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:22:25.651792       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.91.25:443: connect: connection refused
	I0929 12:22:25.651813       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0929 12:23:25.651340       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.104.91.25:443: connect: connection refused
	I0929 12:23:25.651361       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0929 12:23:26.734617       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:23:26.734654       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 12:23:26.734661       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:23:26.734726       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 12:23:26.734802       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 12:23:26.735907       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [f16a413904c8] <==
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:05:08.005616       1 logging.go:59] [core] [Channel #145 SubChannel #146] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:05:08.095162       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:05:08.125460       1 logging.go:59] [core] [Channel #148 SubChannel #149] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b657e8edad2b] <==
	I0929 12:04:43.417902       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-9w9zt"
	I0929 12:04:43.500870       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-x2c2w"
	I0929 12:04:43.515049       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-xbvjd"
	I0929 12:04:43.556340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="409.708028ms"
	I0929 12:04:43.576313       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="19.904674ms"
	I0929 12:04:43.576679       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.403µs"
	I0929 12:04:43.579390       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="107.737µs"
	I0929 12:04:43.604294       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="174.21µs"
	I0929 12:04:44.452823       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0929 12:04:44.468181       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-x2c2w"
	I0929 12:04:44.480092       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="27.29141ms"
	I0929 12:04:44.490584       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="10.424546ms"
	I0929 12:04:44.490697       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.806µs"
	I0929 12:04:45.045836       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="144.847µs"
	I0929 12:04:45.073966       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.289563ms"
	I0929 12:04:45.074158       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="73.243µs"
	I0929 12:04:50.291943       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="117.528µs"
	I0929 12:04:51.152147       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="615.199µs"
	I0929 12:04:51.167916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="161.104µs"
	I0929 12:04:51.172546       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="98.954µs"
	I0929 12:04:57.530997       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0929 12:04:57.545034       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-cqfgh"
	I0929 12:04:57.560078       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="28.908098ms"
	I0929 12:04:57.605438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="45.153823ms"
	I0929 12:04:57.606099       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="293.934µs"
	
	
	==> kube-controller-manager [ee084712c1b8] <==
	I0929 12:19:08.856800       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:19:38.373749       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:19:38.864176       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:20:08.379309       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:20:08.871699       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:20:38.384216       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:20:38.878971       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:21:08.388234       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:21:08.885457       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:21:38.392909       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:21:38.893342       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 12:21:43.514268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="125.661µs"
	I0929 12:21:47.514563       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="123.515µs"
	I0929 12:21:53.513291       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="134.142µs"
	I0929 12:21:56.513434       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="104.899µs"
	I0929 12:22:02.512818       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="137.425µs"
	I0929 12:22:07.514347       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="89.965µs"
	E0929 12:22:08.397256       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:22:08.901084       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:22:38.402749       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:22:38.909397       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:23:08.407310       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:23:08.917261       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 12:23:38.412293       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 12:23:38.923833       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [22ba39d2ae5a] <==
	I0929 12:05:27.342042       1 server_others.go:69] "Using iptables proxy"
	I0929 12:05:27.356586       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0929 12:05:27.385742       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:05:27.389194       1 server_others.go:152] "Using iptables Proxier"
	I0929 12:05:27.389241       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 12:05:27.389253       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 12:05:27.389320       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 12:05:27.389694       1 server.go:846] "Version info" version="v1.28.0"
	I0929 12:05:27.389718       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:27.392202       1 config.go:97] "Starting endpoint slice config controller"
	I0929 12:05:27.392241       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 12:05:27.392381       1 config.go:315] "Starting node config controller"
	I0929 12:05:27.392402       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 12:05:27.392954       1 config.go:188] "Starting service config controller"
	I0929 12:05:27.393018       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 12:05:27.493045       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 12:05:27.493131       1 shared_informer.go:318] Caches are synced for node config
	I0929 12:05:27.494351       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [d3482105a1e1] <==
	I0929 12:04:44.338727       1 server_others.go:69] "Using iptables proxy"
	I0929 12:04:44.358566       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I0929 12:04:44.409997       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:04:44.418531       1 server_others.go:152] "Using iptables Proxier"
	I0929 12:04:44.418581       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 12:04:44.418591       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 12:04:44.418622       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 12:04:44.419106       1 server.go:846] "Version info" version="v1.28.0"
	I0929 12:04:44.419124       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:04:44.420999       1 config.go:188] "Starting service config controller"
	I0929 12:04:44.423323       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 12:04:44.421631       1 config.go:97] "Starting endpoint slice config controller"
	I0929 12:04:44.423863       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 12:04:44.422476       1 config.go:315] "Starting node config controller"
	I0929 12:04:44.424551       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 12:04:44.524998       1 shared_informer.go:318] Caches are synced for service config
	I0929 12:04:44.527171       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 12:04:44.527656       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [7ec694630b5d] <==
	W0929 12:04:27.671446       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 12:04:27.672437       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0929 12:04:27.671522       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 12:04:27.672464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 12:04:27.672043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0929 12:04:27.672559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0929 12:04:28.489560       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0929 12:04:28.489601       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0929 12:04:28.632661       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0929 12:04:28.632706       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0929 12:04:28.652576       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0929 12:04:28.652619       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0929 12:04:28.691939       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0929 12:04:28.691976       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0929 12:04:28.713737       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 12:04:28.713778       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0929 12:04:28.787145       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0929 12:04:28.787174       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0929 12:04:28.888146       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 12:04:28.888189       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0929 12:04:29.141219       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0929 12:04:29.141260       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0929 12:04:32.264978       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 12:04:58.226157       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	E0929 12:04:58.226292       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [f621e5a4db27] <==
	I0929 12:05:23.602302       1 serving.go:348] Generated self-signed cert in-memory
	W0929 12:05:25.725035       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:05:25.725100       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system": RBAC: [clusterrole.rbac.authorization.k8s.io "system:kube-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:volume-scheduler" not found, clusterrole.rbac.authorization.k8s.io "system:basic-user" not found, clusterrole.rbac.authorization.k8s.io "system:discovery" not found, clusterrole.rbac.authorization.k8s.io "system:public-info-viewer" not found]
	W0929 12:05:25.725117       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:05:25.725128       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:05:25.760715       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 12:05:25.762930       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:25.766773       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:05:25.767313       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 12:05:25.767818       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 12:05:25.767906       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0929 12:05:25.868184       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 29 12:22:23 old-k8s-version-858855 kubelet[1330]: E0929 12:22:23.504655    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:22:26 old-k8s-version-858855 kubelet[1330]: E0929 12:22:26.504011    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:22:33 old-k8s-version-858855 kubelet[1330]: E0929 12:22:33.504247    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:22:36 old-k8s-version-858855 kubelet[1330]: E0929 12:22:36.504134    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:22:38 old-k8s-version-858855 kubelet[1330]: E0929 12:22:38.504429    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:22:47 old-k8s-version-858855 kubelet[1330]: E0929 12:22:47.504629    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:22:48 old-k8s-version-858855 kubelet[1330]: E0929 12:22:48.504605    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:22:52 old-k8s-version-858855 kubelet[1330]: E0929 12:22:52.504719    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:22:59 old-k8s-version-858855 kubelet[1330]: E0929 12:22:59.504936    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:23:03 old-k8s-version-858855 kubelet[1330]: E0929 12:23:03.504755    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:23:07 old-k8s-version-858855 kubelet[1330]: E0929 12:23:07.504341    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:23:14 old-k8s-version-858855 kubelet[1330]: E0929 12:23:14.504407    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:23:16 old-k8s-version-858855 kubelet[1330]: E0929 12:23:16.504112    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:23:18 old-k8s-version-858855 kubelet[1330]: E0929 12:23:18.504760    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:23:29 old-k8s-version-858855 kubelet[1330]: E0929 12:23:29.504753    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:23:29 old-k8s-version-858855 kubelet[1330]: E0929 12:23:29.504765    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:23:30 old-k8s-version-858855 kubelet[1330]: E0929 12:23:30.503734    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:23:41 old-k8s-version-858855 kubelet[1330]: E0929 12:23:41.504124    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:23:42 old-k8s-version-858855 kubelet[1330]: E0929 12:23:42.503648    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:23:45 old-k8s-version-858855 kubelet[1330]: E0929 12:23:45.504530    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:23:53 old-k8s-version-858855 kubelet[1330]: E0929 12:23:53.504259    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:23:53 old-k8s-version-858855 kubelet[1330]: E0929 12:23:53.504323    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	Sep 29 12:23:57 old-k8s-version-858855 kubelet[1330]: E0929 12:23:57.504473    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-cqfgh" podUID="d4d011e5-13ca-450c-a245-643d5ee1352c"
	Sep 29 12:24:04 old-k8s-version-858855 kubelet[1330]: E0929 12:24:04.505505    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-dkknq" podUID="56bd0680-8802-4b02-85dd-0e11df6f1e9d"
	Sep 29 12:24:04 old-k8s-version-858855 kubelet[1330]: E0929 12:24:04.506057    1330 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-schbp" podUID="71e083e1-076b-456d-a95a-397cfbfe8d83"
	
	
	==> storage-provisioner [8924ee529df3] <==
	I0929 12:05:27.291709       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:05:57.296307       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [942edd51c699] <==
	I0929 12:06:10.611405       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 12:06:10.619390       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 12:06:10.619479       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 12:06:28.017694       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 12:06:28.017829       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f2b06835-e235-47b6-8894-f950d4aafc39", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-858855_400a105f-e149-4c4a-9a60-7ce30b0d787c became leader
	I0929 12:06:28.017899       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-858855_400a105f-e149-4c4a-9a60-7ce30b0d787c!
	I0929 12:06:28.118164       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-858855_400a105f-e149-4c4a-9a60-7ce30b0d787c!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858855 -n old-k8s-version-858855
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-858855 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-cqfgh dashboard-metrics-scraper-5f989dc9cf-dkknq kubernetes-dashboard-8694d4445c-schbp
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-858855 describe pod metrics-server-57f55c9bc5-cqfgh dashboard-metrics-scraper-5f989dc9cf-dkknq kubernetes-dashboard-8694d4445c-schbp
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-858855 describe pod metrics-server-57f55c9bc5-cqfgh dashboard-metrics-scraper-5f989dc9cf-dkknq kubernetes-dashboard-8694d4445c-schbp: exit status 1 (68.063631ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-cqfgh" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-dkknq" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-schbp" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-858855 describe pod metrics-server-57f55c9bc5-cqfgh dashboard-metrics-scraper-5f989dc9cf-dkknq kubernetes-dashboard-8694d4445c-schbp: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.53s)
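Editor's note (not part of the harness output): the kubelet log above shows the dashboard and metrics-scraper pods stuck in ImagePullBackOff for the whole 9-minute window, which is consistent with this addon check timing out. The NotFound errors from the post-mortem describe are likely only because that command names the pods without a namespace flag (they live in kube-system and kubernetes-dashboard), not because the pods were gone. A minimal triage sketch using standard kubectl against the profile named in this report:

  # describe the stuck pod in its actual namespace
  kubectl --context old-k8s-version-858855 describe pod kubernetes-dashboard-8694d4445c-schbp -n kubernetes-dashboard
  # list recent image-pull related events across all namespaces, newest last
  kubectl --context old-k8s-version-858855 get events -A --sort-by=.lastTimestamp | grep -iE 'pull|backoff'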

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-cxjff" [3e3d7969-3840-4382-aed3-5a0078b5c059] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 12:16:11.354706  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 12:25:01.280790919 +0000 UTC m=+4383.324603016
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 describe po kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-414542 describe po kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-cxjff
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-414542/192.168.85.2
Start Time:       Mon, 29 Sep 2025 12:06:21 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
  IP:           10.244.0.8
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f88dc (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-f88dc:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff to default-k8s-diff-port-414542
  Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
  Normal   BackOff    3m29s (x67 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     3m29s (x67 over 18m)  kubelet            Error: ImagePullBackOff
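Editor's note (not part of the harness output): the Failed event above names the root cause for this run, Docker Hub's unauthenticated pull rate limit, rather than a problem with the manifest or the node. A quick way to confirm it from the CI host is to repeat the same pull by digest outside the cluster; this is a sketch with the plain docker CLI, and authenticating is one possible mitigation since logged-in pulls get a higher quota:

  # reproduce the failing pull exactly as the kubelet attempted it
  docker pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
  # if this also fails with "toomanyrequests", authenticated pulls raise the limit
  docker login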
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 logs kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-414542 logs kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard: exit status 1 (99.365134ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-cxjff" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-414542 logs kubernetes-dashboard-855c9754f9-cxjff -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-414542
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-414542:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b",
	        "Created": "2025-09-29T12:05:11.098346797Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 861575,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:06:07.105136284Z",
	            "FinishedAt": "2025-09-29T12:06:06.313379601Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b/hostname",
	        "HostsPath": "/var/lib/docker/containers/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b/hosts",
	        "LogPath": "/var/lib/docker/containers/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b/3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b-json.log",
	        "Name": "/default-k8s-diff-port-414542",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-414542:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-414542",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "3994f7f7ffb72c898e1e8af564468514c1e8b71726987d7f4a2657a81093f27b",
	                "LowerDir": "/var/lib/docker/overlay2/7d4cf8a861859f395da8695352afe0ccdae1678a37db531007e8d0e65b5d5acf-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/7d4cf8a861859f395da8695352afe0ccdae1678a37db531007e8d0e65b5d5acf/merged",
	                "UpperDir": "/var/lib/docker/overlay2/7d4cf8a861859f395da8695352afe0ccdae1678a37db531007e8d0e65b5d5acf/diff",
	                "WorkDir": "/var/lib/docker/overlay2/7d4cf8a861859f395da8695352afe0ccdae1678a37db531007e8d0e65b5d5acf/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-414542",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-414542/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-414542",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-414542",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-414542",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "30d4404fe9a53d936209a977607824804f2d5865ab2131bb99a438428657a9ef",
	            "SandboxKey": "/var/run/docker/netns/30d4404fe9a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33517"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-414542": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "aa:67:52:2b:51:cd",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "910e288d2f8f50abd1ba56a42ed95d1cdfe96eec6c96b70b9353f7a3dcc003fa",
	                    "EndpointID": "21d0a1f2de01524b0bd3ec6cee0d257c171801b2904b6278c3997f51c27d6f83",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-414542",
	                        "3994f7f7ffb7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
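Editor's note (not part of the harness output): the inspect dump above shows the control-plane container still running, with the 8444/tcp API-server port published on 127.0.0.1:33516. When reading these dumps it can be quicker to query just the port mapping; a sketch with the standard docker CLI, using the container name from this report:

  # print only the published host port for the API server
  docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-414542
  # or, equivalently
  docker port default-k8s-diff-port-414542 8444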
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
E0929 12:25:01.828504  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-414542 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-414542 logs -n 25: (1.265250986s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-858855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ start   │ -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-414542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:05 UTC │
	│ stop    │ -p default-k8s-diff-port-414542 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:05 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-414542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable metrics-server -p embed-certs-031687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p embed-certs-031687 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p embed-certs-031687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-306088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p no-preload-306088 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p no-preload-306088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ image   │ old-k8s-version-858855 image list --format=json                                                                                                                                                                                                 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ pause   │ -p old-k8s-version-858855 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ unpause │ -p old-k8s-version-858855 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ delete  │ -p old-k8s-version-858855                                                                                                                                                                                                                       │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ delete  │ -p old-k8s-version-858855                                                                                                                                                                                                                       │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ start   │ -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-979136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ stop    │ -p newest-cni-979136 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-979136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ start   │ -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:24:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:24:51.027836  905649 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:24:51.028162  905649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:51.028175  905649 out.go:374] Setting ErrFile to fd 2...
	I0929 12:24:51.028179  905649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:51.028374  905649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:24:51.029337  905649 out.go:368] Setting JSON to false
	I0929 12:24:51.030825  905649 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7635,"bootTime":1759141056,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:24:51.030968  905649 start.go:140] virtualization: kvm guest
	I0929 12:24:51.032783  905649 out.go:179] * [newest-cni-979136] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:24:51.034019  905649 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:24:51.034055  905649 notify.go:220] Checking for updates...
	I0929 12:24:51.036459  905649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:24:51.037859  905649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:51.039082  905649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:24:51.040311  905649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:24:51.041587  905649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:24:51.043195  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:51.043728  905649 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:24:51.068175  905649 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:24:51.068255  905649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:24:51.123146  905649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:24:51.112794792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:24:51.123257  905649 docker.go:318] overlay module found
	I0929 12:24:51.125091  905649 out.go:179] * Using the docker driver based on existing profile
	I0929 12:24:51.126326  905649 start.go:304] selected driver: docker
	I0929 12:24:51.126339  905649 start.go:924] validating driver "docker" against &{Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:51.126430  905649 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:24:51.127121  905649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:24:51.186671  905649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:24:51.176838416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:24:51.187052  905649 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 12:24:51.187093  905649 cni.go:84] Creating CNI manager for ""
	I0929 12:24:51.187164  905649 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:24:51.187225  905649 start.go:348] cluster config:
	{Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:51.189168  905649 out.go:179] * Starting "newest-cni-979136" primary control-plane node in "newest-cni-979136" cluster
	I0929 12:24:51.190349  905649 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:24:51.192465  905649 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:24:51.193503  905649 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:24:51.193547  905649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:24:51.193547  905649 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 12:24:51.193585  905649 cache.go:58] Caching tarball of preloaded images
	I0929 12:24:51.193693  905649 preload.go:172] Found /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 12:24:51.193704  905649 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 12:24:51.193824  905649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/config.json ...
	I0929 12:24:51.214508  905649 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:24:51.214530  905649 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:24:51.214551  905649 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:24:51.214581  905649 start.go:360] acquireMachinesLock for newest-cni-979136: {Name:mkc9e89421b142ce40f5cb759383c5450ffdf976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:24:51.214640  905649 start.go:364] duration metric: took 37.274µs to acquireMachinesLock for "newest-cni-979136"
	I0929 12:24:51.214660  905649 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:24:51.214665  905649 fix.go:54] fixHost starting: 
	I0929 12:24:51.214885  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:51.232065  905649 fix.go:112] recreateIfNeeded on newest-cni-979136: state=Stopped err=<nil>
	W0929 12:24:51.232092  905649 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 12:24:51.234018  905649 out.go:252] * Restarting existing docker container for "newest-cni-979136" ...
	I0929 12:24:51.234081  905649 cli_runner.go:164] Run: docker start newest-cni-979136
	I0929 12:24:51.475044  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:51.494168  905649 kic.go:430] container "newest-cni-979136" state is running.
	I0929 12:24:51.494681  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:51.514623  905649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/config.json ...
	I0929 12:24:51.514852  905649 machine.go:93] provisionDockerMachine start ...
	I0929 12:24:51.514945  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:51.533238  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:51.533491  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:51.533504  905649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:24:51.534277  905649 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55270->127.0.0.1:33533: read: connection reset by peer
	I0929 12:24:54.676970  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-979136
	
	I0929 12:24:54.677005  905649 ubuntu.go:182] provisioning hostname "newest-cni-979136"
	I0929 12:24:54.677081  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:54.695975  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:54.696244  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:54.696263  905649 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-979136 && echo "newest-cni-979136" | sudo tee /etc/hostname
	I0929 12:24:54.848177  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-979136
	
	I0929 12:24:54.848263  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:54.868568  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:54.868809  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:54.868828  905649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-979136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-979136/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-979136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:24:55.006440  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:24:55.006486  905649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:24:55.006506  905649 ubuntu.go:190] setting up certificates
	I0929 12:24:55.006518  905649 provision.go:84] configureAuth start
	I0929 12:24:55.006580  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:55.025054  905649 provision.go:143] copyHostCerts
	I0929 12:24:55.025121  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:24:55.025140  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:24:55.025215  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:24:55.025317  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:24:55.025326  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:24:55.025353  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:24:55.025420  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:24:55.025427  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:24:55.025450  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:24:55.025513  905649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.newest-cni-979136 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-979136]
	I0929 12:24:55.243153  905649 provision.go:177] copyRemoteCerts
	I0929 12:24:55.243218  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:24:55.243264  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.263249  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:55.364291  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:24:55.389609  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:24:55.415500  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:24:55.440533  905649 provision.go:87] duration metric: took 434.000782ms to configureAuth
	I0929 12:24:55.440563  905649 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:24:55.440758  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:55.440818  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.460318  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.460729  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.460755  905649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:24:55.597583  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:24:55.597610  905649 ubuntu.go:71] root file system type: overlay
	I0929 12:24:55.597747  905649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:24:55.597807  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.619201  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.619420  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.619486  905649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:24:55.771605  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:24:55.771704  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.790052  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.790282  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.790300  905649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:24:55.932301  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:24:55.932334  905649 machine.go:96] duration metric: took 4.417466701s to provisionDockerMachine
	I0929 12:24:55.932351  905649 start.go:293] postStartSetup for "newest-cni-979136" (driver="docker")
	I0929 12:24:55.932365  905649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:24:55.932465  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:24:55.932550  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.954244  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.052000  905649 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:24:56.055711  905649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:24:56.055754  905649 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:24:56.055765  905649 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:24:56.055774  905649 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:24:56.055787  905649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:24:56.055831  905649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:24:56.055972  905649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:24:56.056075  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:24:56.065385  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:24:56.090181  905649 start.go:296] duration metric: took 157.792312ms for postStartSetup
	I0929 12:24:56.090268  905649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:24:56.090315  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.109744  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.202986  905649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:24:56.207666  905649 fix.go:56] duration metric: took 4.992992519s for fixHost
	I0929 12:24:56.207696  905649 start.go:83] releasing machines lock for "newest-cni-979136", held for 4.993042953s
	I0929 12:24:56.207761  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:56.225816  905649 ssh_runner.go:195] Run: cat /version.json
	I0929 12:24:56.225856  905649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:24:56.225890  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.225953  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.243859  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.245388  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.410148  905649 ssh_runner.go:195] Run: systemctl --version
	I0929 12:24:56.415184  905649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:24:56.419735  905649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:24:56.439126  905649 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:24:56.439194  905649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:24:56.448391  905649 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:24:56.448426  905649 start.go:495] detecting cgroup driver to use...
	I0929 12:24:56.448461  905649 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:24:56.448625  905649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:24:56.465656  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:24:56.476251  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:24:56.486622  905649 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:24:56.486697  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:24:56.497049  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:24:56.507303  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:24:56.517167  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:24:56.527790  905649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:24:56.537523  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:24:56.548028  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:24:56.558377  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 12:24:56.568281  905649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:24:56.577443  905649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:24:56.586851  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:56.660866  905649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:24:56.741769  905649 start.go:495] detecting cgroup driver to use...
	I0929 12:24:56.741823  905649 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:24:56.741899  905649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:24:56.755292  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:24:56.767224  905649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:24:56.786855  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:24:56.799497  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:24:56.811529  905649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:24:56.829453  905649 ssh_runner.go:195] Run: which cri-dockerd
	I0929 12:24:56.833521  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:24:56.842646  905649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:24:56.860977  905649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:24:56.931377  905649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:24:57.001000  905649 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:24:57.001140  905649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 12:24:57.020740  905649 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:24:57.032094  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:57.102971  905649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 12:24:57.943232  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:24:57.958776  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:24:57.970760  905649 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 12:24:57.983315  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:24:57.994666  905649 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:24:58.061628  905649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:24:58.131372  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.196002  905649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:24:58.216042  905649 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:24:58.227496  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.296813  905649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:24:58.382030  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:24:58.396219  905649 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:24:58.396294  905649 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:24:58.400678  905649 start.go:563] Will wait 60s for crictl version
	I0929 12:24:58.400758  905649 ssh_runner.go:195] Run: which crictl
	I0929 12:24:58.404435  905649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:24:58.440974  905649 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:24:58.441049  905649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:24:58.466313  905649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:24:58.495007  905649 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:24:58.495109  905649 cli_runner.go:164] Run: docker network inspect newest-cni-979136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:24:58.513187  905649 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0929 12:24:58.517404  905649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:24:58.531305  905649 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0929 12:24:58.532547  905649 kubeadm.go:875] updating cluster {Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:24:58.532682  905649 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:24:58.532746  905649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:24:58.553550  905649 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 12:24:58.553578  905649 docker.go:621] Images already preloaded, skipping extraction
	I0929 12:24:58.553660  905649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:24:58.574817  905649 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 12:24:58.574850  905649 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:24:58.574864  905649 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 docker true true} ...
	I0929 12:24:58.575035  905649 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-979136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:24:58.575101  905649 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:24:58.629742  905649 cni.go:84] Creating CNI manager for ""
	I0929 12:24:58.629778  905649 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:24:58.629793  905649 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0929 12:24:58.629820  905649 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-979136 NodeName:newest-cni-979136 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:24:58.630059  905649 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-979136"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:24:58.630139  905649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:24:58.640481  905649 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:24:58.640539  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:24:58.650199  905649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0929 12:24:58.670388  905649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:24:58.690755  905649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I0929 12:24:58.710213  905649 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:24:58.714041  905649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:24:58.726275  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.797764  905649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:24:58.820648  905649 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136 for IP: 192.168.103.2
	I0929 12:24:58.820678  905649 certs.go:194] generating shared ca certs ...
	I0929 12:24:58.820699  905649 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:58.820926  905649 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:24:58.820988  905649 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:24:58.821002  905649 certs.go:256] generating profile certs ...
	I0929 12:24:58.821111  905649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/client.key
	I0929 12:24:58.821198  905649 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.key.d397cfea
	I0929 12:24:58.821246  905649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.key
	I0929 12:24:58.821404  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:24:58.821450  905649 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:24:58.821464  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:24:58.821501  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:24:58.821531  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:24:58.821564  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:24:58.821615  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:24:58.824178  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:24:58.854835  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:24:58.885381  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:24:58.922169  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:24:58.954035  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:24:58.984832  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:24:59.010911  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:24:59.038716  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:24:59.066074  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:24:59.092081  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:24:59.117971  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:24:59.144530  905649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:24:59.163121  905649 ssh_runner.go:195] Run: openssl version
	I0929 12:24:59.168833  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:24:59.178922  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.182635  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.182700  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.189919  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:24:59.201241  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:24:59.211375  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.215068  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.215127  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.222147  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
	I0929 12:24:59.231678  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:24:59.242049  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.246376  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.246428  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.253659  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:24:59.263390  905649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:24:59.267282  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:24:59.274371  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:24:59.281316  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:24:59.288070  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:24:59.295169  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:24:59.302222  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 12:24:59.309049  905649 kubeadm.go:392] StartCluster: {Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:59.309197  905649 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:24:59.329631  905649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:24:59.340419  905649 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:24:59.340443  905649 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:24:59.340499  905649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:24:59.352342  905649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:24:59.354702  905649 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-979136" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:59.355829  905649 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-357219/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-979136" cluster setting kubeconfig missing "newest-cni-979136" context setting]
	I0929 12:24:59.356906  905649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
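	The kubeconfig repair above rewrites the file directly; in kubectl terms the two missing entries amount to roughly the following (server address taken from the node IP and port in this log, commands illustrative rather than what minikube actually executes):

	  kubectl config set-cluster newest-cni-979136 --server=https://192.168.103.2:8443
	  kubectl config set-context newest-cni-979136 --cluster=newest-cni-979136 --user=newest-cni-979136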
	I0929 12:24:59.358824  905649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:24:59.369732  905649 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0929 12:24:59.369771  905649 kubeadm.go:593] duration metric: took 29.321487ms to restartPrimaryControlPlane
	I0929 12:24:59.369786  905649 kubeadm.go:394] duration metric: took 60.74854ms to StartCluster
	I0929 12:24:59.369807  905649 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:59.370000  905649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:59.372304  905649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:59.372523  905649 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:24:59.372601  905649 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:24:59.372719  905649 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-979136"
	I0929 12:24:59.372746  905649 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-979136"
	I0929 12:24:59.372756  905649 addons.go:69] Setting default-storageclass=true in profile "newest-cni-979136"
	I0929 12:24:59.372756  905649 addons.go:69] Setting metrics-server=true in profile "newest-cni-979136"
	W0929 12:24:59.372774  905649 addons.go:247] addon storage-provisioner should already be in state true
	I0929 12:24:59.372785  905649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-979136"
	I0929 12:24:59.372787  905649 addons.go:238] Setting addon metrics-server=true in "newest-cni-979136"
	I0929 12:24:59.372774  905649 addons.go:69] Setting dashboard=true in profile "newest-cni-979136"
	I0929 12:24:59.372811  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.372828  905649 addons.go:238] Setting addon dashboard=true in "newest-cni-979136"
	W0929 12:24:59.372841  905649 addons.go:247] addon dashboard should already be in state true
	I0929 12:24:59.372868  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:59.372907  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	W0929 12:24:59.372798  905649 addons.go:247] addon metrics-server should already be in state true
	I0929 12:24:59.372999  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.373193  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373362  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373382  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373688  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.374952  905649 out.go:179] * Verifying Kubernetes components...
	I0929 12:24:59.377094  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:59.406520  905649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:24:59.408932  905649 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:24:59.408962  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:24:59.409032  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.410909  905649 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 12:24:59.410960  905649 addons.go:238] Setting addon default-storageclass=true in "newest-cni-979136"
	W0929 12:24:59.411621  905649 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:24:59.411678  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.412421  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.412644  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:24:59.413305  905649 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:24:59.413569  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.412765  905649 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 12:24:59.415132  905649 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 12:24:59.417291  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 12:24:59.417368  905649 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 12:24:59.417470  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.445215  905649 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:24:59.446126  905649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:24:59.446304  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.450121  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.452948  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.463761  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.473057  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
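	Each of the four ssh clients above dials 127.0.0.1:33533, the host port Docker mapped to the container's 22/tcp, which is what the repeated "docker container inspect -f ..." queries resolve. The same lookup can be reproduced by hand with the profile name from this run:

	  docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-979136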
	I0929 12:24:59.521104  905649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:24:59.559358  905649 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:24:59.559440  905649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:24:59.588555  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:24:59.588580  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 12:24:59.590849  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:24:59.595174  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:24:59.596995  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 12:24:59.597012  905649 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 12:24:59.620788  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:24:59.620818  905649 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:24:59.630246  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 12:24:59.630275  905649 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 12:24:59.657282  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:24:59.657315  905649 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 12:24:59.661119  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 12:24:59.661147  905649 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 12:24:59.685140  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 12:24:59.685170  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 12:24:59.687157  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.687205  905649 retry.go:31] will retry after 361.728613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 12:24:59.687243  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.687273  905649 retry.go:31] will retry after 219.336799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.688567  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:24:59.709373  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 12:24:59.709406  905649 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 12:24:59.740607  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 12:24:59.740643  905649 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 12:24:59.768803  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 12:24:59.768851  905649 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0929 12:24:59.775203  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.775240  905649 retry.go:31] will retry after 332.898484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.796800  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 12:24:59.796831  905649 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 12:24:59.821588  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:24:59.821619  905649 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 12:24:59.847936  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:24:59.907657  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:25:00.049522  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:25:00.060062  905649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:25:00.108553  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:25:01.867904  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.019889375s)
	I0929 12:25:01.867969  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.960182947s)
	I0929 12:25:01.870762  905649 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-979136 addons enable metrics-server
	
	I0929 12:25:02.020400  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.970827565s)
	I0929 12:25:02.020504  905649 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.960409127s)
	I0929 12:25:02.020541  905649 api_server.go:72] duration metric: took 2.647991456s to wait for apiserver process to appear ...
	I0929 12:25:02.020557  905649 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:25:02.020579  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:02.020607  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.912007941s)
	I0929 12:25:02.020637  905649 addons.go:479] Verifying addon metrics-server=true in "newest-cni-979136"
	I0929 12:25:02.022202  905649 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
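	The earlier "apply failed, will retry" entries occurred because the API server was not yet answering on localhost:8443 when the first kubectl apply ran; once it came up, the retried --force applies above completed and the addons were enabled. A minimal sketch of the same retry idea, using an invented fixed delay rather than minikube's randomized sub-second backoff:

	  for attempt in 1 2 3 4 5; do
	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml && break
	    sleep 1   # placeholder delay for illustration only
	  done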
	
	
	==> Docker <==
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.889451316Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:08 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:08.889549913Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:12:08 default-k8s-diff-port-414542 cri-dockerd[1116]: time="2025-09-29T12:12:08Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:12:17 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:17.796584573Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:12:17 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:12:17.827167946Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:17:09 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:09.268056129Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:17:09 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:09.268103168Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:17:09 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:09.270332550Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:17:09 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:09.270376988Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:17:15 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:15.844466396Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:17:15 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:15.890075217Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:17:15 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:15.890185820Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:17:15 default-k8s-diff-port-414542 cri-dockerd[1116]: time="2025-09-29T12:17:15Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:17:24 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:24.800652258Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:17:24 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:17:24.831115623Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:22:19 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:19.844271301Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:22:19 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:19.895466509Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:22:19 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:19.895568805Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:22:19 default-k8s-diff-port-414542 cri-dockerd[1116]: time="2025-09-29T12:22:19Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:22:22 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:22.677607933Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:22:22 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:22.677646163Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:22:22 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:22.679751254Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:22:22 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:22.679786518Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 12:22:38 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:38.799743460Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:22:38 default-k8s-diff-port-414542 dockerd[805]: time="2025-09-29T12:22:38.834421175Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3e8ebd1a20bfc       6e38f40d628db                                                                                         18 minutes ago      Running             storage-provisioner       2                   a36e40e4be015       storage-provisioner
	780d293abd667       56cc512116c8f                                                                                         18 minutes ago      Running             busybox                   1                   c52f1bc00aa92       busybox
	4a3ca81fe2f1a       52546a367cc9e                                                                                         18 minutes ago      Running             coredns                   1                   bd94f1800e4a3       coredns-66bc5c9577-zqqdn
	f8587a790c480       6e38f40d628db                                                                                         18 minutes ago      Exited              storage-provisioner       1                   a36e40e4be015       storage-provisioner
	12638a28f3092       df0860106674d                                                                                         18 minutes ago      Running             kube-proxy                1                   cd6249d9b3faa       kube-proxy-bspjk
	7d541696821e3       46169d968e920                                                                                         18 minutes ago      Running             kube-scheduler            1                   cc91534300045       kube-scheduler-default-k8s-diff-port-414542
	d91e30763cb74       90550c43ad2bc                                                                                         18 minutes ago      Running             kube-apiserver            1                   d6b4d97a3c8cf       kube-apiserver-default-k8s-diff-port-414542
	cfcc3c32a6429       a0af72f2ec6d6                                                                                         18 minutes ago      Running             kube-controller-manager   1                   6cdca3ea59f62       kube-controller-manager-default-k8s-diff-port-414542
	63101e5318f49       5f1f5298c888d                                                                                         18 minutes ago      Running             etcd                      1                   10a4673adc5bc       etcd-default-k8s-diff-port-414542
	47156dda7bdb0       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Exited              busybox                   0                   97f983e609474       busybox
	a418c15537b4f       52546a367cc9e                                                                                         19 minutes ago      Exited              coredns                   0                   8b4c4bb9b075f       coredns-66bc5c9577-zqqdn
	cf88e0ff6e4c5       df0860106674d                                                                                         19 minutes ago      Exited              kube-proxy                0                   2653c32e79939       kube-proxy-bspjk
	f12fc2b57d5c7       a0af72f2ec6d6                                                                                         19 minutes ago      Exited              kube-controller-manager   0                   fa259aa7113b7       kube-controller-manager-default-k8s-diff-port-414542
	c052b7974c71e       90550c43ad2bc                                                                                         19 minutes ago      Exited              kube-apiserver            0                   cda4e6ba82c43       kube-apiserver-default-k8s-diff-port-414542
	7be81117198c4       46169d968e920                                                                                         19 minutes ago      Exited              kube-scheduler            0                   1f4e115702e59       kube-scheduler-default-k8s-diff-port-414542
	289ff9fbcded6       5f1f5298c888d                                                                                         19 minutes ago      Exited              etcd                      0                   1ce2a65f82bd4       etcd-default-k8s-diff-port-414542
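	The table above has the column layout of the CRI client (crictl ps -a); assuming the same profile, something like the following should reproduce it on the node:

	  minikube -p default-k8s-diff-port-414542 ssh -- sudo crictl ps -a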
	
	
	==> coredns [4a3ca81fe2f1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43464 - 46009 "HINFO IN 1513859665036013232.7870983957954654933. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.021421812s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [a418c15537b4] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:45638 - 13643 "HINFO IN 4710081106409396512.4132293983694253617. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.048326747s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-414542
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-414542
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=default-k8s-diff-port-414542
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_05_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:05:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-414542
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:24:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:23:39 +0000   Mon, 29 Sep 2025 12:05:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:23:39 +0000   Mon, 29 Sep 2025 12:05:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:23:39 +0000   Mon, 29 Sep 2025 12:05:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:23:39 +0000   Mon, 29 Sep 2025 12:05:32 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    default-k8s-diff-port-414542
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 bcfa6945434c4edbae65e29ccc26141f
	  System UUID:                c9dfe7da-7478-4379-bb83-cc78f009c0b7
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-zqqdn                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-default-k8s-diff-port-414542                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-414542             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-414542    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-bspjk                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-414542             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-btxhj                         100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-k7qd7              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-cxjff                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node default-k8s-diff-port-414542 event: Registered Node default-k8s-diff-port-414542 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-414542 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node default-k8s-diff-port-414542 event: Registered Node default-k8s-diff-port-414542 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 8f 99 59 79 53 08 06
	[  +0.010443] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 ef 7b 7a 25 80 08 06
	[Sep29 12:05] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 2f 1f 69 18 cd 08 06
	[  +1.465609] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e fa a1 d1 16 fd 08 06
	[  +0.010904] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 28 d0 79 65 86 08 06
	[ +11.321410] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 4d be 93 b2 64 08 06
	[  +0.030376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d1 94 90 6f a6 08 06
	[  +0.372330] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ae 62 92 9c b4 08 06
	[Sep29 12:06] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be c7 f6 43 2b 7f 08 06
	[ +17.127071] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a de e7 85 72 24 08 06
	[ +12.501214] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 4d 9c c6 34 d5 08 06
	[Sep29 12:24] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 8f 0c 17 b8 91 08 06
	[Sep29 12:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 5f 3c 17 4f d8 08 06
	
	
	==> etcd [289ff9fbcded] <==
	{"level":"warn","ts":"2025-09-29T12:05:31.252941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.259467Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.266281Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.281642Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.288433Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.295021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:31.345896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33912","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:05:56.093205Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:05:56.093289Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-414542","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:05:56.093396Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:03.094986Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:03.095100Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T12:06:03.095138Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:03.095233Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:06:03.095195Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"error","ts":"2025-09-29T12:06:03.095248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T12:06:03.095194Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:03.095264Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:06:03.095272Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:03.095274Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T12:06:03.095287Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T12:06:03.098049Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-09-29T12:06:03.098106Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:03.098134Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T12:06:03.098143Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-414542","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> etcd [63101e5318f4] <==
	{"level":"warn","ts":"2025-09-29T12:06:16.437462Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.445601Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.453286Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.460568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.468241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53952","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.476793Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.488562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.498821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.501343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54044","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.508194Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.514619Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.521503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.528455Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.535327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.541912Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54174","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.554691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54188","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.561565Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.568526Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:16.625924Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:16:16.057729Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1028}
	{"level":"info","ts":"2025-09-29T12:16:16.076365Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1028,"took":"18.250725ms","hash":544860740,"current-db-size-bytes":3145728,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1245184,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2025-09-29T12:16:16.076410Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":544860740,"revision":1028,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T12:21:16.063104Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1284}
	{"level":"info","ts":"2025-09-29T12:21:16.065850Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1284,"took":"2.429285ms","hash":633140514,"current-db-size-bytes":3145728,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1798144,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T12:21:16.065919Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":633140514,"revision":1284,"compact-revision":1028}
	
	
	==> kernel <==
	 12:25:02 up  2:07,  0 users,  load average: 1.22, 0.86, 1.56
	Linux default-k8s-diff-port-414542 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [c052b7974c71] <==
	W0929 12:06:05.284230       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.301748       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.349820       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.349827       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.387964       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.448541       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.459070       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.476680       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.486116       1 logging.go:55] [core] [Channel #2 SubChannel #5]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.507000       1 logging.go:55] [core] [Channel #1 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.523461       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.628753       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.645426       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.693281       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.704865       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.717987       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.770277       1 logging.go:55] [core] [Channel #139 SubChannel #141]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.771501       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.915775       1 logging.go:55] [core] [Channel #195 SubChannel #197]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.931414       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.948789       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:05.997933       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:06.020699       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:06.032124       1 logging.go:55] [core] [Channel #207 SubChannel #209]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:06.055743       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [d91e30763cb7] <==
	I0929 12:21:18.143639       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:21:18.995507       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:22:18.143132       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:22:18.143195       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:22:18.143212       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:22:18.144214       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:22:18.144300       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:22:18.144315       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:22:33.004665       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:22:44.159508       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:23:45.135082       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:23:56.994970       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:24:18.144024       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:24:18.144092       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:24:18.144106       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:24:18.145076       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:24:18.145190       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:24:18.145209       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:25:00.648290       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [cfcc3c32a642] <==
	I0929 12:18:50.670124       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:19:20.590253       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:19:20.677599       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:19:50.594341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:19:50.684179       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:20:20.599028       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:20:20.690839       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:20:50.603627       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:20:50.698190       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:21:20.608466       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:21:20.704608       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:21:50.613592       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:21:50.712544       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:22:20.617584       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:22:20.719717       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:22:50.622937       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:22:50.727285       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:23:20.627388       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:23:20.733544       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:23:50.631024       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:23:50.740373       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:24:20.636078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:24:20.748740       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:24:50.641224       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:24:50.758473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [f12fc2b57d5c] <==
	I0929 12:05:38.762442       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:05:38.762732       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 12:05:38.762793       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 12:05:38.762821       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:05:38.763407       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 12:05:38.763671       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:05:38.764642       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 12:05:38.764691       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 12:05:38.764731       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:05:38.764821       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-414542"
	I0929 12:05:38.764894       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:05:38.764866       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:05:38.765771       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 12:05:38.765807       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 12:05:38.767128       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0929 12:05:38.768695       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:05:38.768758       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:05:38.768807       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:05:38.768817       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:05:38.768830       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:05:38.772251       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:05:38.772652       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:38.778967       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-414542" podCIDRs=["10.244.0.0/24"]
	I0929 12:05:38.781010       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 12:05:38.792415       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [12638a28f309] <==
	I0929 12:06:18.497579       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:06:18.555564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:06:18.655767       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:06:18.655808       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 12:06:18.655988       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:06:18.678567       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:06:18.678633       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:06:18.684359       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:06:18.684687       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:06:18.684703       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:18.685852       1 config.go:309] "Starting node config controller"
	I0929 12:06:18.685912       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:06:18.685922       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:06:18.685959       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:06:18.685984       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:06:18.686049       1 config.go:200] "Starting service config controller"
	I0929 12:06:18.686180       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:06:18.686123       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:06:18.686237       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:06:18.786828       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:06:18.786850       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:06:18.786926       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [cf88e0ff6e4c] <==
	I0929 12:05:40.276143       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:05:40.344682       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:05:40.445659       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:05:40.445710       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 12:05:40.447009       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:05:40.475794       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:05:40.475915       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:05:40.482629       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:05:40.484328       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:05:40.484449       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:40.489656       1 config.go:200] "Starting service config controller"
	I0929 12:05:40.489678       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:05:40.489705       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:05:40.489710       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:05:40.489798       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:05:40.489810       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:05:40.493045       1 config.go:309] "Starting node config controller"
	I0929 12:05:40.493111       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:05:40.493139       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:05:40.590636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:05:40.590694       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:05:40.591139       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [7be81117198c] <==
	E0929 12:05:31.775908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:05:31.776086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:05:31.776117       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:31.776139       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:05:31.776269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:31.776378       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:05:31.776389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 12:05:31.776630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:05:31.777273       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:05:32.697709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:05:32.760127       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:05:32.761947       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 12:05:32.788573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:05:32.814766       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:05:32.861075       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:05:32.871260       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:05:33.022943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:33.037096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:05:33.075271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:05:35.872675       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:05:56.078809       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:05:56.078855       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:05:56.079248       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:05:56.079369       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:05:56.079394       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [7d541696821e] <==
	I0929 12:06:16.084120       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:06:17.082131       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:06:17.082258       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:06:17.082293       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:06:17.082345       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:06:17.127857       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:06:17.127900       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:17.132953       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:17.133130       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:17.133273       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:06:17.133353       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:06:17.235448       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 12:23:20 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:20.782113    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:23:22 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:22.782196    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:23:25 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:25.782463    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:23:31 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:31.782183    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:23:34 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:34.782993    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:23:36 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:36.782619    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:23:43 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:43.781505    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:23:47 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:47.782354    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:23:47 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:47.782414    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:23:54 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:54.782285    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:23:59 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:23:59.782091    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:24:00 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:00.782994    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:24:09 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:09.782647    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:24:11 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:11.782688    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:24:15 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:15.782632    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:24:21 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:21.782295    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:24:22 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:22.783112    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:24:26 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:26.782360    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:24:36 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:36.782853    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:24:36 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:36.782938    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:24:38 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:38.783218    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:24:47 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:47.781991    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	Sep 29 12:24:50 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:50.782064    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-btxhj" podUID="704e9868-4eca-4392-ab18-e672c65eeea7"
	Sep 29 12:24:53 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:24:53.782452    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-k7qd7" podUID="b365ec77-d7a3-41aa-bb95-064352d7687b"
	Sep 29 12:25:02 default-k8s-diff-port-414542 kubelet[1345]: E0929 12:25:02.782725    1345 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-cxjff" podUID="3e3d7969-3840-4382-aed3-5a0078b5c059"
	
	
	==> storage-provisioner [3e8ebd1a20bf] <==
	W0929 12:24:37.593966       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:39.597137       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:39.601706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:41.604994       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:41.609380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:43.612566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:43.617027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:45.620854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:45.627088       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:47.629944       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:47.634576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:49.638120       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:49.643836       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:51.647224       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:51.651372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:53.654909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:53.661007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:55.664716       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:55.669182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:57.672445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:57.676571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:59.680271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:59.688007       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:01.691298       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:01.701938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f8587a790c48] <==
	I0929 12:06:18.473754       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:06:48.478215       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-btxhj dashboard-metrics-scraper-6ffb444bf9-k7qd7 kubernetes-dashboard-855c9754f9-cxjff
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 describe pod metrics-server-746fcd58dc-btxhj dashboard-metrics-scraper-6ffb444bf9-k7qd7 kubernetes-dashboard-855c9754f9-cxjff
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-414542 describe pod metrics-server-746fcd58dc-btxhj dashboard-metrics-scraper-6ffb444bf9-k7qd7 kubernetes-dashboard-855c9754f9-cxjff: exit status 1 (66.149348ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-btxhj" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-k7qd7" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-cxjff" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-414542 describe pod metrics-server-746fcd58dc-btxhj dashboard-metrics-scraper-6ffb444bf9-k7qd7 kubernetes-dashboard-855c9754f9-cxjff: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.87s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-l9zp7" [3644e7d0-9ed1-4318-b46e-d6c46932ae65] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 12:16:17.326833  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031687 -n embed-certs-031687
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 12:25:14.889157109 +0000 UTC m=+4396.932969207
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-031687 describe po kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-031687 describe po kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-l9zp7
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-031687/192.168.76.2
Start Time:       Mon, 29 Sep 2025 12:06:38 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-k4mkh (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-k4mkh:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7 to embed-certs-031687
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m28s (x66 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m28s (x66 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-031687 logs kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-031687 logs kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard: exit status 1 (74.901666ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-l9zp7" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-031687 logs kubernetes-dashboard-855c9754f9-l9zp7 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-031687 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-031687
helpers_test.go:243: (dbg) docker inspect embed-certs-031687:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04",
	        "Created": "2025-09-29T12:05:01.102607645Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 866700,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:06:24.05064812Z",
	            "FinishedAt": "2025-09-29T12:06:23.219404934Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04/hostname",
	        "HostsPath": "/var/lib/docker/containers/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04/hosts",
	        "LogPath": "/var/lib/docker/containers/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04/e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04-json.log",
	        "Name": "/embed-certs-031687",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-031687:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-031687",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e4f6355ca9ce00ebd6cdbb824fc87d2924773aa8ea0e986539aa158c806dee04",
	                "LowerDir": "/var/lib/docker/overlay2/998b5ac965ecfd37fdc19422783a57b67430225be76307a031e81a6367d9ae90-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/998b5ac965ecfd37fdc19422783a57b67430225be76307a031e81a6367d9ae90/merged",
	                "UpperDir": "/var/lib/docker/overlay2/998b5ac965ecfd37fdc19422783a57b67430225be76307a031e81a6367d9ae90/diff",
	                "WorkDir": "/var/lib/docker/overlay2/998b5ac965ecfd37fdc19422783a57b67430225be76307a031e81a6367d9ae90/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-031687",
	                "Source": "/var/lib/docker/volumes/embed-certs-031687/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-031687",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-031687",
	                "name.minikube.sigs.k8s.io": "embed-certs-031687",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8cfd1fe5476ded4503d7cb9d88249e773444e93173c3f2a335f7be1b4bde0bc8",
	            "SandboxKey": "/var/run/docker/netns/8cfd1fe5476d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33518"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33519"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33522"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33520"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33521"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-031687": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:76:b8:93:d7:3f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "bcd2926a5ec53b938330bde349b95cf914c53ca94ae1c2f503c01a3cdcda13e2",
	                    "EndpointID": "16d33f17954c4be00250a5728ca37721615d4dc68bfaf37d18d59cb5ac36f637",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-031687",
	                        "e4f6355ca9ce"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-031687 -n embed-certs-031687
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-031687 logs -n 25
E0929 12:25:15.769195  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-031687 logs -n 25: (1.056138155s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ start   │ -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ addons  │ enable metrics-server -p no-preload-306088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ stop    │ -p no-preload-306088 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ addons  │ enable dashboard -p no-preload-306088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:06 UTC │
	│ start   │ -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-306088            │ jenkins │ v1.37.0 │ 29 Sep 25 12:06 UTC │ 29 Sep 25 12:07 UTC │
	│ image   │ old-k8s-version-858855 image list --format=json                                                                                                                                                                                                 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ pause   │ -p old-k8s-version-858855 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ unpause │ -p old-k8s-version-858855 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ delete  │ -p old-k8s-version-858855                                                                                                                                                                                                                       │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ delete  │ -p old-k8s-version-858855                                                                                                                                                                                                                       │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ start   │ -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-979136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ stop    │ -p newest-cni-979136 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-979136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ start   │ -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:25 UTC │
	│ image   │ default-k8s-diff-port-414542 image list --format=json                                                                                                                                                                                           │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ image   │ newest-cni-979136 image list --format=json                                                                                                                                                                                                      │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ pause   │ -p default-k8s-diff-port-414542 --alsologtostderr -v=1                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ pause   │ -p newest-cni-979136 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ unpause │ -p default-k8s-diff-port-414542 --alsologtostderr -v=1                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ unpause │ -p newest-cni-979136 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p default-k8s-diff-port-414542                                                                                                                                                                                                                 │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p newest-cni-979136                                                                                                                                                                                                                            │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p default-k8s-diff-port-414542                                                                                                                                                                                                                 │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p newest-cni-979136                                                                                                                                                                                                                            │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:24:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:24:51.027836  905649 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:24:51.028162  905649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:51.028175  905649 out.go:374] Setting ErrFile to fd 2...
	I0929 12:24:51.028179  905649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:51.028374  905649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:24:51.029337  905649 out.go:368] Setting JSON to false
	I0929 12:24:51.030825  905649 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7635,"bootTime":1759141056,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:24:51.030968  905649 start.go:140] virtualization: kvm guest
	I0929 12:24:51.032783  905649 out.go:179] * [newest-cni-979136] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:24:51.034019  905649 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:24:51.034055  905649 notify.go:220] Checking for updates...
	I0929 12:24:51.036459  905649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:24:51.037859  905649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:51.039082  905649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:24:51.040311  905649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:24:51.041587  905649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:24:51.043195  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:51.043728  905649 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:24:51.068175  905649 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:24:51.068255  905649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:24:51.123146  905649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:24:51.112794792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:24:51.123257  905649 docker.go:318] overlay module found
	I0929 12:24:51.125091  905649 out.go:179] * Using the docker driver based on existing profile
	I0929 12:24:51.126326  905649 start.go:304] selected driver: docker
	I0929 12:24:51.126339  905649 start.go:924] validating driver "docker" against &{Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:51.126430  905649 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:24:51.127121  905649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:24:51.186671  905649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:24:51.176838416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:24:51.187052  905649 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 12:24:51.187093  905649 cni.go:84] Creating CNI manager for ""
	I0929 12:24:51.187164  905649 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:24:51.187225  905649 start.go:348] cluster config:
	{Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:51.189168  905649 out.go:179] * Starting "newest-cni-979136" primary control-plane node in "newest-cni-979136" cluster
	I0929 12:24:51.190349  905649 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:24:51.192465  905649 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:24:51.193503  905649 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:24:51.193547  905649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:24:51.193547  905649 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 12:24:51.193585  905649 cache.go:58] Caching tarball of preloaded images
	I0929 12:24:51.193693  905649 preload.go:172] Found /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 12:24:51.193704  905649 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 12:24:51.193824  905649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/config.json ...
	I0929 12:24:51.214508  905649 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:24:51.214530  905649 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:24:51.214551  905649 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:24:51.214581  905649 start.go:360] acquireMachinesLock for newest-cni-979136: {Name:mkc9e89421b142ce40f5cb759383c5450ffdf976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:24:51.214640  905649 start.go:364] duration metric: took 37.274µs to acquireMachinesLock for "newest-cni-979136"
	I0929 12:24:51.214660  905649 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:24:51.214665  905649 fix.go:54] fixHost starting: 
	I0929 12:24:51.214885  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:51.232065  905649 fix.go:112] recreateIfNeeded on newest-cni-979136: state=Stopped err=<nil>
	W0929 12:24:51.232092  905649 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 12:24:51.234018  905649 out.go:252] * Restarting existing docker container for "newest-cni-979136" ...
	I0929 12:24:51.234081  905649 cli_runner.go:164] Run: docker start newest-cni-979136
	I0929 12:24:51.475044  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:51.494168  905649 kic.go:430] container "newest-cni-979136" state is running.
	I0929 12:24:51.494681  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:51.514623  905649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/config.json ...
	I0929 12:24:51.514852  905649 machine.go:93] provisionDockerMachine start ...
	I0929 12:24:51.514945  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:51.533238  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:51.533491  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:51.533504  905649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:24:51.534277  905649 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55270->127.0.0.1:33533: read: connection reset by peer
	I0929 12:24:54.676970  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-979136
	
	I0929 12:24:54.677005  905649 ubuntu.go:182] provisioning hostname "newest-cni-979136"
	I0929 12:24:54.677081  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:54.695975  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:54.696244  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:54.696263  905649 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-979136 && echo "newest-cni-979136" | sudo tee /etc/hostname
	I0929 12:24:54.848177  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-979136
	
	I0929 12:24:54.848263  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:54.868568  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:54.868809  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:54.868828  905649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-979136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-979136/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-979136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:24:55.006440  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:24:55.006486  905649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:24:55.006506  905649 ubuntu.go:190] setting up certificates
	I0929 12:24:55.006518  905649 provision.go:84] configureAuth start
	I0929 12:24:55.006580  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:55.025054  905649 provision.go:143] copyHostCerts
	I0929 12:24:55.025121  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:24:55.025140  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:24:55.025215  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:24:55.025317  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:24:55.025326  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:24:55.025353  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:24:55.025420  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:24:55.025427  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:24:55.025450  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:24:55.025513  905649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.newest-cni-979136 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-979136]
	I0929 12:24:55.243153  905649 provision.go:177] copyRemoteCerts
	I0929 12:24:55.243218  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:24:55.243264  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.263249  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:55.364291  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:24:55.389609  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:24:55.415500  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:24:55.440533  905649 provision.go:87] duration metric: took 434.000782ms to configureAuth
	I0929 12:24:55.440563  905649 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:24:55.440758  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:55.440818  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.460318  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.460729  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.460755  905649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:24:55.597583  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:24:55.597610  905649 ubuntu.go:71] root file system type: overlay
	I0929 12:24:55.597747  905649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:24:55.597807  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.619201  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.619420  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.619486  905649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:24:55.771605  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:24:55.771704  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.790052  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.790282  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.790300  905649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:24:55.932301  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
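	After the diff/mv/daemon-reload above swaps the new unit into place, the effective dockerd command line can be inspected directly (sketch using standard systemctl invocations, not part of this run):

	    sudo systemctl cat docker.service              # print the unit file systemd has loaded
	    systemctl show docker --property=ExecStart     # the empty ExecStart= reset means only the dockerd command above remains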
	I0929 12:24:55.932334  905649 machine.go:96] duration metric: took 4.417466701s to provisionDockerMachine
	I0929 12:24:55.932351  905649 start.go:293] postStartSetup for "newest-cni-979136" (driver="docker")
	I0929 12:24:55.932365  905649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:24:55.932465  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:24:55.932550  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.954244  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.052000  905649 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:24:56.055711  905649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:24:56.055754  905649 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:24:56.055765  905649 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:24:56.055774  905649 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:24:56.055787  905649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:24:56.055831  905649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:24:56.055972  905649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:24:56.056075  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:24:56.065385  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:24:56.090181  905649 start.go:296] duration metric: took 157.792312ms for postStartSetup
	I0929 12:24:56.090268  905649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:24:56.090315  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.109744  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.202986  905649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:24:56.207666  905649 fix.go:56] duration metric: took 4.992992519s for fixHost
	I0929 12:24:56.207696  905649 start.go:83] releasing machines lock for "newest-cni-979136", held for 4.993042953s
	I0929 12:24:56.207761  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:56.225816  905649 ssh_runner.go:195] Run: cat /version.json
	I0929 12:24:56.225856  905649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:24:56.225890  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.225953  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.243859  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.245388  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.410148  905649 ssh_runner.go:195] Run: systemctl --version
	I0929 12:24:56.415184  905649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:24:56.419735  905649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:24:56.439126  905649 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
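	The find/sed one-liner above only injects a "name" field and pins cniVersion, so a patched loopback config would end up roughly as sketched below (inferred from the sed expressions, not captured from the node):

	    cat /etc/cni/net.d/*loopback.conf*
	    # roughly: { "cniVersion": "1.0.0", "name": "loopback", "type": "loopback" }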
	I0929 12:24:56.439194  905649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:24:56.448391  905649 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:24:56.448426  905649 start.go:495] detecting cgroup driver to use...
	I0929 12:24:56.448461  905649 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:24:56.448625  905649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:24:56.465656  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:24:56.476251  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:24:56.486622  905649 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:24:56.486697  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:24:56.497049  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:24:56.507303  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:24:56.517167  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:24:56.527790  905649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:24:56.537523  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:24:56.548028  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:24:56.558377  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
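	To verify that the containerd edits above took effect, one could grep the rewritten config (sketch, not part of this run):

	    grep -n 'SystemdCgroup\|sandbox_image\|conf_dir' /etc/containerd/config.toml
	    # expected after the seds: SystemdCgroup = true, sandbox_image = "registry.k8s.io/pause:3.10.1", conf_dir = "/etc/cni/net.d"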
	I0929 12:24:56.568281  905649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:24:56.577443  905649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:24:56.586851  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:56.660866  905649 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 12:24:56.741769  905649 start.go:495] detecting cgroup driver to use...
	I0929 12:24:56.741823  905649 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:24:56.741899  905649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:24:56.755292  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:24:56.767224  905649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:24:56.786855  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:24:56.799497  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:24:56.811529  905649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:24:56.829453  905649 ssh_runner.go:195] Run: which cri-dockerd
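	With /etc/crictl.yaml now pointing at cri-dockerd, crictl reaches Docker through the CRI shim; a minimal smoke test would be (sketch, not part of this run):

	    sudo crictl info                                                        # uses runtime-endpoint from /etc/crictl.yaml
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a   # or pass the endpoint explicitly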
	I0929 12:24:56.833521  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:24:56.842646  905649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:24:56.860977  905649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:24:56.931377  905649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:24:57.001000  905649 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:24:57.001140  905649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 12:24:57.020740  905649 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:24:57.032094  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:57.102971  905649 ssh_runner.go:195] Run: sudo systemctl restart docker
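	The 129-byte /etc/docker/daemon.json pushed above is not echoed in the log; for a daemon being switched to the systemd cgroup driver it would typically carry an exec-opts entry like the one sketched below (assumption, not the captured file), which the restart just applied:

	    sudo cat /etc/docker/daemon.json
	    # typically something like (assumed): { "exec-opts": ["native.cgroupdriver=systemd"], ... }
	    docker info --format '{{.CgroupDriver}}'   # should now report "systemd"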
	I0929 12:24:57.943232  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:24:57.958776  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:24:57.970760  905649 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 12:24:57.983315  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:24:57.994666  905649 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:24:58.061628  905649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:24:58.131372  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.196002  905649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:24:58.216042  905649 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:24:58.227496  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.296813  905649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:24:58.382030  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:24:58.396219  905649 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:24:58.396294  905649 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:24:58.400678  905649 start.go:563] Will wait 60s for crictl version
	I0929 12:24:58.400758  905649 ssh_runner.go:195] Run: which crictl
	I0929 12:24:58.404435  905649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:24:58.440974  905649 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:24:58.441049  905649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:24:58.466313  905649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:24:58.495007  905649 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:24:58.495109  905649 cli_runner.go:164] Run: docker network inspect newest-cni-979136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:24:58.513187  905649 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0929 12:24:58.517404  905649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:24:58.531305  905649 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0929 12:24:58.532547  905649 kubeadm.go:875] updating cluster {Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mo
untString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:24:58.532682  905649 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:24:58.532746  905649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:24:58.553550  905649 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 12:24:58.553578  905649 docker.go:621] Images already preloaded, skipping extraction
	I0929 12:24:58.553660  905649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:24:58.574817  905649 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 12:24:58.574850  905649 cache_images.go:85] Images are preloaded, skipping loading
	I0929 12:24:58.574864  905649 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 docker true true} ...
	I0929 12:24:58.575035  905649 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-979136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:24:58.575101  905649 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:24:58.629742  905649 cni.go:84] Creating CNI manager for ""
	I0929 12:24:58.629778  905649 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:24:58.629793  905649 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0929 12:24:58.629820  905649 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-979136 NodeName:newest-cni-979136 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:24:58.630059  905649 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-979136"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:24:58.630139  905649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:24:58.640481  905649 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:24:58.640539  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:24:58.650199  905649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0929 12:24:58.670388  905649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:24:58.690755  905649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
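	Before the generated config is used, it can be sanity-checked against the target kubeadm binary (sketch; `kubeadm config validate` is available in recent releases, with the binary path taken from the binaries line above):

	    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new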
	I0929 12:24:58.710213  905649 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:24:58.714041  905649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:24:58.726275  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.797764  905649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:24:58.820648  905649 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136 for IP: 192.168.103.2
	I0929 12:24:58.820678  905649 certs.go:194] generating shared ca certs ...
	I0929 12:24:58.820699  905649 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:58.820926  905649 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:24:58.820988  905649 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:24:58.821002  905649 certs.go:256] generating profile certs ...
	I0929 12:24:58.821111  905649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/client.key
	I0929 12:24:58.821198  905649 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.key.d397cfea
	I0929 12:24:58.821246  905649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.key
	I0929 12:24:58.821404  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:24:58.821450  905649 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:24:58.821464  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:24:58.821501  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:24:58.821531  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:24:58.821564  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:24:58.821615  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:24:58.824178  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:24:58.854835  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:24:58.885381  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:24:58.922169  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:24:58.954035  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:24:58.984832  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:24:59.010911  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:24:59.038716  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:24:59.066074  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:24:59.092081  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:24:59.117971  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:24:59.144530  905649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:24:59.163121  905649 ssh_runner.go:195] Run: openssl version
	I0929 12:24:59.168833  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:24:59.178922  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.182635  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.182700  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.189919  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:24:59.201241  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:24:59.211375  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.215068  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.215127  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.222147  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
	I0929 12:24:59.231678  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:24:59.242049  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.246376  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.246428  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.253659  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
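	The *.0 symlink names used above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective CA certificates, so the link name can be reproduced from the cert itself (sketch mirroring the hash command already run above):

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem       # prints b5213941
	    ls -l /etc/ssl/certs/$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem).0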
	I0929 12:24:59.263390  905649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:24:59.267282  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:24:59.274371  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:24:59.281316  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:24:59.288070  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:24:59.295169  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:24:59.302222  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
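	Each -checkend 86400 probe above exits 0 only if the certificate is still valid 24 hours from now, which is how the restart path decides whether the existing control-plane certs can be reused (sketch):

	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
	        echo "cert valid for at least another 24h; reuse it"
	    else
	        echo "cert expires within 24h (or is unreadable); regenerate"
	    fi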
	I0929 12:24:59.309049  905649 kubeadm.go:392] StartCluster: {Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount
String: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:59.309197  905649 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:24:59.329631  905649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:24:59.340419  905649 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:24:59.340443  905649 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:24:59.340499  905649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:24:59.352342  905649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:24:59.354702  905649 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-979136" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:59.355829  905649 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-357219/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-979136" cluster setting kubeconfig missing "newest-cni-979136" context setting]
	I0929 12:24:59.356906  905649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:59.358824  905649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:24:59.369732  905649 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0929 12:24:59.369771  905649 kubeadm.go:593] duration metric: took 29.321487ms to restartPrimaryControlPlane
	I0929 12:24:59.369786  905649 kubeadm.go:394] duration metric: took 60.74854ms to StartCluster
	I0929 12:24:59.369807  905649 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:59.370000  905649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:59.372304  905649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:59.372523  905649 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:24:59.372601  905649 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:24:59.372719  905649 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-979136"
	I0929 12:24:59.372746  905649 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-979136"
	I0929 12:24:59.372756  905649 addons.go:69] Setting default-storageclass=true in profile "newest-cni-979136"
	I0929 12:24:59.372756  905649 addons.go:69] Setting metrics-server=true in profile "newest-cni-979136"
	W0929 12:24:59.372774  905649 addons.go:247] addon storage-provisioner should already be in state true
	I0929 12:24:59.372785  905649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-979136"
	I0929 12:24:59.372787  905649 addons.go:238] Setting addon metrics-server=true in "newest-cni-979136"
	I0929 12:24:59.372774  905649 addons.go:69] Setting dashboard=true in profile "newest-cni-979136"
	I0929 12:24:59.372811  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.372828  905649 addons.go:238] Setting addon dashboard=true in "newest-cni-979136"
	W0929 12:24:59.372841  905649 addons.go:247] addon dashboard should already be in state true
	I0929 12:24:59.372868  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:59.372907  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	W0929 12:24:59.372798  905649 addons.go:247] addon metrics-server should already be in state true
	I0929 12:24:59.372999  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.373193  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373362  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373382  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373688  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.374952  905649 out.go:179] * Verifying Kubernetes components...
	I0929 12:24:59.377094  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:59.406520  905649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:24:59.408932  905649 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:24:59.408962  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:24:59.409032  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.410909  905649 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 12:24:59.410960  905649 addons.go:238] Setting addon default-storageclass=true in "newest-cni-979136"
	W0929 12:24:59.411621  905649 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:24:59.411678  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.412421  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.412644  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:24:59.413305  905649 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:24:59.413569  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.412765  905649 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 12:24:59.415132  905649 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 12:24:59.417291  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 12:24:59.417368  905649 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 12:24:59.417470  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.445215  905649 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:24:59.446126  905649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:24:59.446304  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.450121  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.452948  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.463761  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.473057  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.521104  905649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:24:59.559358  905649 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:24:59.559440  905649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:24:59.588555  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:24:59.588580  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 12:24:59.590849  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:24:59.595174  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:24:59.596995  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 12:24:59.597012  905649 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 12:24:59.620788  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:24:59.620818  905649 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:24:59.630246  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 12:24:59.630275  905649 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 12:24:59.657282  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:24:59.657315  905649 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 12:24:59.661119  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 12:24:59.661147  905649 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 12:24:59.685140  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 12:24:59.685170  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 12:24:59.687157  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.687205  905649 retry.go:31] will retry after 361.728613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 12:24:59.687243  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.687273  905649 retry.go:31] will retry after 219.336799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
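	The "connection refused" validation failures above just mean the freshly restarted apiserver is not listening yet; minikube retries, and the same readiness could be probed directly before applying (sketch, using the healthz endpoint this log checks further down):

	    until curl -fsSk https://192.168.103.2:8443/healthz >/dev/null; do
	        echo "waiting for kube-apiserver..."; sleep 2
	    done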
	I0929 12:24:59.688567  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:24:59.709373  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 12:24:59.709406  905649 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 12:24:59.740607  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 12:24:59.740643  905649 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 12:24:59.768803  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 12:24:59.768851  905649 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0929 12:24:59.775203  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.775240  905649 retry.go:31] will retry after 332.898484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.796800  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 12:24:59.796831  905649 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 12:24:59.821588  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:24:59.821619  905649 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 12:24:59.847936  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:24:59.907657  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:25:00.049522  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:25:00.060062  905649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:25:00.108553  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:25:01.867904  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.019889375s)
	I0929 12:25:01.867969  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.960182947s)
	I0929 12:25:01.870762  905649 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-979136 addons enable metrics-server
	
	I0929 12:25:02.020400  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.970827565s)
	I0929 12:25:02.020504  905649 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.960409127s)
	I0929 12:25:02.020541  905649 api_server.go:72] duration metric: took 2.647991456s to wait for apiserver process to appear ...
	I0929 12:25:02.020557  905649 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:25:02.020579  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:02.020607  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.912007941s)
	I0929 12:25:02.020637  905649 addons.go:479] Verifying addon metrics-server=true in "newest-cni-979136"
	I0929 12:25:02.022202  905649 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	I0929 12:25:02.023704  905649 addons.go:514] duration metric: took 2.651123895s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:25:02.026548  905649 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:02.026573  905649 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:02.521050  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:02.527087  905649 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:02.527120  905649 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:03.020698  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:03.025513  905649 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:03.025540  905649 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:03.521035  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:03.525634  905649 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0929 12:25:03.526669  905649 api_server.go:141] control plane version: v1.34.0
	I0929 12:25:03.526694  905649 api_server.go:131] duration metric: took 1.506128439s to wait for apiserver health ...
	I0929 12:25:03.526707  905649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:25:03.530192  905649 system_pods.go:59] 8 kube-system pods found
	I0929 12:25:03.530230  905649 system_pods.go:61] "coredns-66bc5c9577-gk5jp" [5541b17e-3975-4fda-a1e6-4fb4228931c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:25:03.530241  905649 system_pods.go:61] "etcd-newest-cni-979136" [7b81140b-0f04-45e5-af0e-297e6a11f50c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:25:03.530256  905649 system_pods.go:61] "kube-apiserver-newest-cni-979136" [d4665571-5d4f-409f-8d40-88dbd632ab57] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:25:03.530267  905649 system_pods.go:61] "kube-controller-manager-newest-cni-979136" [ce7c8835-111b-4bd1-997f-42bc7d8d43a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:25:03.530276  905649 system_pods.go:61] "kube-proxy-xksn2" [27ad0cb4-d548-4e8d-8d9e-64fad85f4633] Running
	I0929 12:25:03.530284  905649 system_pods.go:61] "kube-scheduler-newest-cni-979136" [832d940b-8c48-4e49-9ebe-ef6dd51b0f02] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:25:03.530294  905649 system_pods.go:61] "metrics-server-746fcd58dc-kl6rh" [52b32e3f-94ff-4bbd-aa81-1c106f59614e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:25:03.530299  905649 system_pods.go:61] "storage-provisioner" [6a60b242-8cbe-48c8-a86d-03b43412482c] Running
	I0929 12:25:03.530313  905649 system_pods.go:74] duration metric: took 3.599496ms to wait for pod list to return data ...
	I0929 12:25:03.530326  905649 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:25:03.532656  905649 default_sa.go:45] found service account: "default"
	I0929 12:25:03.532682  905649 default_sa.go:55] duration metric: took 2.346917ms for default service account to be created ...
	I0929 12:25:03.532698  905649 kubeadm.go:578] duration metric: took 4.160147155s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 12:25:03.532726  905649 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:25:03.534982  905649 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:25:03.535004  905649 node_conditions.go:123] node cpu capacity is 8
	I0929 12:25:03.535020  905649 node_conditions.go:105] duration metric: took 2.285053ms to run NodePressure ...
	I0929 12:25:03.535039  905649 start.go:241] waiting for startup goroutines ...
	I0929 12:25:03.535053  905649 start.go:246] waiting for cluster config update ...
	I0929 12:25:03.535070  905649 start.go:255] writing updated cluster config ...
	I0929 12:25:03.535347  905649 ssh_runner.go:195] Run: rm -f paused
	I0929 12:25:03.594962  905649 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:25:03.597006  905649 out.go:179] * Done! kubectl is now configured to use "newest-cni-979136" cluster and "default" namespace by default
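	
	Note: the 500/200 sequence above is the expected pattern right after a restart: api_server.go polls /healthz roughly every 500ms, and the per-hook [+]/[-] listing in the 500 bodies only means a few post-start hooks (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller) have not finished yet. The same probe can be reproduced by hand; a minimal sketch, assuming anonymous access to /healthz is allowed (the default system:public-info-viewer binding) and reusing the endpoint shown in the log:
	
	  APISERVER=https://192.168.103.2:8443                      # address taken from the log above
	  # Poll until the apiserver stops answering 500 "healthz check failed"
	  until [ "$(curl -sk -o /dev/null -w '%{http_code}' "$APISERVER/healthz")" = "200" ]; do
	    sleep 0.5
	  done
	  # The per-check [+]/[-] listing seen in the 500 bodies is the verbose form of the same endpoint
	  curl -sk "$APISERVER/healthz?verbose"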
	
	
	==> Docker <==
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.032768936Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.091938083Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.142342471Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:31 embed-certs-031687 dockerd[822]: time="2025-09-29T12:12:31.142457472Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:12:31 embed-certs-031687 cri-dockerd[1137]: time="2025-09-29T12:12:31Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:17:30 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:30.928175851Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:17:30 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:30.960622480Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:17:33 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:33.966581082Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:17:34 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:34.022834921Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:17:34 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:34.023001918Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:17:34 embed-certs-031687 cri-dockerd[1137]: time="2025-09-29T12:17:34Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:17:40 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:40.403494915Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:17:40 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:40.403533143Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:17:40 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:40.405432946Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:17:40 embed-certs-031687 dockerd[822]: time="2025-09-29T12:17:40.405469165Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:22:38 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:38.923106613Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:22:38 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:38.949562927Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:22:39 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:39.971570811Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:22:40 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:40.015641860Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:22:40 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:40.015729417Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:22:40 embed-certs-031687 cri-dockerd[1137]: time="2025-09-29T12:22:40Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:22:49 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:49.413284753Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:22:49 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:49.413344037Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 12:22:49 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:49.415571362Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:22:49 embed-certs-031687 dockerd[822]: time="2025-09-29T12:22:49.415623790Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
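	
	Note: two pulls fail repeatedly in this window: docker.io/kubernetesui/dashboard hits the anonymous Docker Hub rate limit ("toomanyrequests"), and registry.k8s.io/echoserver:1.4 is rejected because its schema 1 manifest is no longer supported by this Docker engine, in addition to the deliberately unresolvable fake.domain image. For the rate-limited dashboard image, a hypothetical workaround (not part of the test run) is to fetch it once on an authenticated host and side-load it into the node so the kubelet never pulls through the anonymous path; minikube image load is assumed here to accept the digest reference, otherwise re-tag the image locally first:
	
	  docker login                                   # lift the anonymous pull limit on the host
	  docker pull kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
	  minikube -p embed-certs-031687 image load \
	    kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93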
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	7cfd570c5c36d       6e38f40d628db                                                                                         17 minutes ago      Running             storage-provisioner       2                   4af3b4b1eeadf       storage-provisioner
	1bb6a696f26fe       56cc512116c8f                                                                                         18 minutes ago      Running             busybox                   1                   d03c130d10ba2       busybox
	65f801ea60ac6       52546a367cc9e                                                                                         18 minutes ago      Running             coredns                   1                   4da192a695c28       coredns-66bc5c9577-h49hh
	45741390c4acf       df0860106674d                                                                                         18 minutes ago      Running             kube-proxy                1                   c55e5d2f2ec55       kube-proxy-8lx97
	cd9c371dd7393       6e38f40d628db                                                                                         18 minutes ago      Exited              storage-provisioner       1                   4af3b4b1eeadf       storage-provisioner
	27f5ea637472f       5f1f5298c888d                                                                                         18 minutes ago      Running             etcd                      1                   a0608b8f66091       etcd-embed-certs-031687
	916456bc8bfb2       a0af72f2ec6d6                                                                                         18 minutes ago      Running             kube-controller-manager   1                   96caa39c99c65       kube-controller-manager-embed-certs-031687
	312c71e7e1091       90550c43ad2bc                                                                                         18 minutes ago      Running             kube-apiserver            1                   658dff92c25e2       kube-apiserver-embed-certs-031687
	468b88a7167c9       46169d968e920                                                                                         18 minutes ago      Running             kube-scheduler            1                   4ac0078e59a95       kube-scheduler-embed-certs-031687
	9d0e4dcfe570e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Exited              busybox                   0                   fd46c3ca34837       busybox
	3a1ef2c419226       52546a367cc9e                                                                                         19 minutes ago      Exited              coredns                   0                   ed4fce6488e6d       coredns-66bc5c9577-h49hh
	b0b17b7d55279       df0860106674d                                                                                         19 minutes ago      Exited              kube-proxy                0                   8f6ae65849b90       kube-proxy-8lx97
	0f7e04b4b32c9       a0af72f2ec6d6                                                                                         19 minutes ago      Exited              kube-controller-manager   0                   d0eee0a7fb6d8       kube-controller-manager-embed-certs-031687
	f99b1cd1736c0       90550c43ad2bc                                                                                         19 minutes ago      Exited              kube-apiserver            0                   90e66d4ed1426       kube-apiserver-embed-certs-031687
	9c9d110cd2307       5f1f5298c888d                                                                                         19 minutes ago      Exited              etcd                      0                   87c3183dd5d82       etcd-embed-certs-031687
	90223f818ad9b       46169d968e920                                                                                         19 minutes ago      Exited              kube-scheduler            0                   59c7ac5354001       kube-scheduler-embed-certs-031687
	
	
	==> coredns [3a1ef2c41922] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	[INFO] Reloading complete
	[INFO] 127.0.0.1:46583 - 57672 "HINFO IN 4837871372873753732.8949169030992615212. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012772655s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [65f801ea60ac] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50261 - 15654 "HINFO IN 6548381319171350955.8783735724164066773. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.415128803s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
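	
	Note: both coredns instances show the same startup pattern: the kubernetes plugin starts with an unsynced API and its reflectors time out listing Services, EndpointSlices and Namespaces through the service VIP 10.96.0.1:443, which typically clears once kube-proxy has reprogrammed the service rules after the node restart. A few hypothetical checks (not part of the captured run) to confirm the VIP is backed by a live apiserver endpoint:
	
	  kubectl get svc kubernetes                              # the ClusterIP behind 10.96.0.1
	  kubectl get endpoints kubernetes                        # should list the apiserver's real address and port
	  kubectl -n kube-system get pods -l k8s-app=kube-proxy -o wide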
	
	
	==> describe nodes <==
	Name:               embed-certs-031687
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-031687
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=embed-certs-031687
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_05_23_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:05:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-031687
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:25:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:22:42 +0000   Mon, 29 Sep 2025 12:05:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:22:42 +0000   Mon, 29 Sep 2025 12:05:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:22:42 +0000   Mon, 29 Sep 2025 12:05:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:22:42 +0000   Mon, 29 Sep 2025 12:05:20 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    embed-certs-031687
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3db81bd2691471cb1038dba05261875
	  System UUID:                bc2311f8-9925-42fe-a1ac-db9ee40b62fe
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-h49hh                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-embed-certs-031687                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kube-apiserver-embed-certs-031687             250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-embed-certs-031687    200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-8lx97                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-embed-certs-031687             100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-w5slh               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-77hqb    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-l9zp7         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node embed-certs-031687 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node embed-certs-031687 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node embed-certs-031687 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node embed-certs-031687 event: Registered Node embed-certs-031687 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-031687 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-031687 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-031687 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                node-controller  Node embed-certs-031687 event: Registered Node embed-certs-031687 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 8f 99 59 79 53 08 06
	[  +0.010443] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 ef 7b 7a 25 80 08 06
	[Sep29 12:05] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 2f 1f 69 18 cd 08 06
	[  +1.465609] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e fa a1 d1 16 fd 08 06
	[  +0.010904] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 28 d0 79 65 86 08 06
	[ +11.321410] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 4d be 93 b2 64 08 06
	[  +0.030376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d1 94 90 6f a6 08 06
	[  +0.372330] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ae 62 92 9c b4 08 06
	[Sep29 12:06] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be c7 f6 43 2b 7f 08 06
	[ +17.127071] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a de e7 85 72 24 08 06
	[ +12.501214] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 4d 9c c6 34 d5 08 06
	[Sep29 12:24] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 8f 0c 17 b8 91 08 06
	[Sep29 12:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 5f 3c 17 4f d8 08 06
	
	
	==> etcd [27f5ea637472] <==
	{"level":"warn","ts":"2025-09-29T12:06:33.703824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.710419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.716966Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36084","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.725020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.731313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.737427Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.743333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.749548Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.761756Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.763617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.769866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.775868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.782263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.788312Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36282","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.794109Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36300","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.806295Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.812945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:33.818960Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36372","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:16:33.379576Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1026}
	{"level":"info","ts":"2025-09-29T12:16:33.399059Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1026,"took":"19.089399ms","hash":3790831538,"current-db-size-bytes":3145728,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1232896,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2025-09-29T12:16:33.399126Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3790831538,"revision":1026,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T12:21:33.384790Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1281}
	{"level":"info","ts":"2025-09-29T12:21:33.387571Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1281,"took":"2.445138ms","hash":23069022,"current-db-size-bytes":3145728,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1789952,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T12:21:33.387609Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":23069022,"revision":1281,"compact-revision":1026}
	{"level":"info","ts":"2025-09-29T12:24:15.797979Z","caller":"traceutil/trace.go:172","msg":"trace[23203969] transaction","detail":"{read_only:false; response_revision:1679; number_of_response:1; }","duration":"128.645437ms","start":"2025-09-29T12:24:15.669316Z","end":"2025-09-29T12:24:15.797961Z","steps":["trace[23203969] 'process raft request'  (duration: 127.980973ms)"],"step_count":1}
	
	
	==> etcd [9c9d110cd230] <==
	{"level":"info","ts":"2025-09-29T12:05:20.253137Z","caller":"traceutil/trace.go:172","msg":"trace[438781435] transaction","detail":"{read_only:false; response_revision:16; number_of_response:1; }","duration":"216.614403ms","start":"2025-09-29T12:05:20.036516Z","end":"2025-09-29T12:05:20.253131Z","steps":["trace[438781435] 'process raft request'  (duration: 216.108114ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:05:20.253177Z","caller":"traceutil/trace.go:172","msg":"trace[178786943] transaction","detail":"{read_only:false; response_revision:19; number_of_response:1; }","duration":"216.076305ms","start":"2025-09-29T12:05:20.037091Z","end":"2025-09-29T12:05:20.253167Z","steps":["trace[178786943] 'process raft request'  (duration: 215.721542ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:05:20.253177Z","caller":"traceutil/trace.go:172","msg":"trace[2052007329] transaction","detail":"{read_only:false; response_revision:18; number_of_response:1; }","duration":"216.315627ms","start":"2025-09-29T12:05:20.036851Z","end":"2025-09-29T12:05:20.253166Z","steps":["trace[2052007329] 'process raft request'  (duration: 215.856225ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-29T12:05:20.315571Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.503296ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-09-29T12:05:20.315644Z","caller":"traceutil/trace.go:172","msg":"trace[1802507612] range","detail":"{range_begin:/registry/limitranges; range_end:; response_count:0; response_revision:20; }","duration":"101.589942ms","start":"2025-09-29T12:05:20.214042Z","end":"2025-09-29T12:05:20.315632Z","steps":["trace[1802507612] 'agreement among raft nodes before linearized reading'  (duration: 99.529105ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:05:20.316203Z","caller":"traceutil/trace.go:172","msg":"trace[590188282] transaction","detail":"{read_only:false; response_revision:21; number_of_response:1; }","duration":"144.624881ms","start":"2025-09-29T12:05:20.171567Z","end":"2025-09-29T12:05:20.316192Z","steps":["trace[590188282] 'process raft request'  (duration: 142.020353ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:05:20.316481Z","caller":"traceutil/trace.go:172","msg":"trace[2129118222] transaction","detail":"{read_only:false; response_revision:22; number_of_response:1; }","duration":"140.057316ms","start":"2025-09-29T12:05:20.176413Z","end":"2025-09-29T12:05:20.316470Z","steps":["trace[2129118222] 'process raft request'  (duration: 139.345283ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:06:12.985194Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:06:12.985271Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"embed-certs-031687","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:06:12.985362Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:19.987502Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:19.988725Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:19.988786Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-09-29T12:06:19.988846Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:06:19.988841Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:19.988905Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:06:19.988928Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:19.988887Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T12:06:19.988834Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:19.988967Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T12:06:19.988984Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:19.990811Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-09-29T12:06:19.990897Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:19.990936Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-29T12:06:19.990951Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"embed-certs-031687","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 12:25:16 up  2:07,  0 users,  load average: 1.31, 0.89, 1.56
	Linux embed-certs-031687 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [312c71e7e109] <==
	I0929 12:21:35.318262       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:22:18.866557       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:22:32.381609       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:22:35.317400       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:22:35.317450       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:22:35.317464       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:22:35.318553       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:22:35.318679       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:22:35.318694       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:23:40.885284       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:23:47.046957       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:24:35.317613       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:24:35.317669       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:24:35.317684       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:24:35.319740       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:24:35.319839       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:24:35.319902       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:24:54.553116       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:25:00.818660       1 stats.go:136] "Error getting keys" err="empty key: \"\""
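	(Editor's note: the repeated OpenAPI aggregation failures above are the apiserver failing to reach the aggregated v1beta1.metrics.k8s.io APIService, which answers 503 because metrics-server never becomes ready in this run. A minimal sketch of reading that APIService's conditions through the kube-aggregator client to confirm which aggregated API is unavailable — the kubeconfig path is illustrative:)

	package main

	import (
		"context"
		"fmt"
		"log"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
	)

	func main() {
		// Illustrative kubeconfig path; not taken from this test run.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		client, err := aggregator.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		apisvc, err := client.ApiregistrationV1().APIServices().Get(
			context.TODO(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// The Available condition explains 503 responses like the ones logged above.
		for _, c := range apisvc.Status.Conditions {
			fmt.Printf("%s=%s: %s\n", c.Type, c.Status, c.Message)
		}
	}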
	
	
	==> kube-apiserver [f99b1cd1736c] <==
	W0929 12:06:22.173248       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.225623       1 logging.go:55] [core] [Channel #55 SubChannel #57]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.250487       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.260995       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.321634       1 logging.go:55] [core] [Channel #71 SubChannel #73]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.322890       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.326206       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.326466       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.442261       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.483047       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.522776       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.563322       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.625065       1 logging.go:55] [core] [Channel #231 SubChannel #233]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.628446       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.641087       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.695925       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.701399       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.713117       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.720508       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.723869       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.744293       1 logging.go:55] [core] [Channel #135 SubChannel #137]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.778299       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.872752       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.907850       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:22.929310       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [0f7e04b4b32c] <==
	I0929 12:05:26.847273       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 12:05:26.847121       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:05:26.847135       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 12:05:26.847105       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:05:26.847793       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0929 12:05:26.848253       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:05:26.848648       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:05:26.848934       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 12:05:26.849937       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 12:05:26.850787       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-031687"
	I0929 12:05:26.850836       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 12:05:26.850004       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 12:05:26.853527       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 12:05:26.853608       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 12:05:26.857657       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:05:26.858247       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:26.861353       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:05:26.864042       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:26.868711       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:05:26.876993       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 12:05:26.885414       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:05:26.885549       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0929 12:05:26.896330       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:05:26.896353       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:05:26.896362       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [916456bc8bfb] <==
	I0929 12:19:07.873685       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:19:37.795337       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:19:37.879832       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:20:07.799772       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:20:07.886996       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:20:37.805476       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:20:37.893631       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:21:07.810082       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:21:07.900984       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:21:37.814761       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:21:37.908724       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:22:07.818633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:22:07.916488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:22:37.823900       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:22:37.924041       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:23:07.827833       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:23:07.931456       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:23:37.832192       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:23:37.938372       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:24:07.836478       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:24:07.947103       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:24:37.841349       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:24:37.954676       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:25:07.846832       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:25:07.963421       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [45741390c4ac] <==
	I0929 12:06:35.626310       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:06:35.678839       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:06:35.778998       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:06:35.779050       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 12:06:35.779224       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:06:35.809789       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:06:35.809858       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:06:35.815781       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:06:35.816158       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:06:35.816189       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:35.817699       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:06:35.817726       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:06:35.817754       1 config.go:200] "Starting service config controller"
	I0929 12:06:35.817758       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:06:35.817776       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:06:35.817781       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:06:35.818059       1 config.go:309] "Starting node config controller"
	I0929 12:06:35.818074       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:06:35.818080       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:06:35.917829       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:06:35.917843       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:06:35.917890       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [b0b17b7d5527] <==
	I0929 12:05:28.818009       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:05:28.893572       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:05:28.993751       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:05:28.993806       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 12:05:28.994987       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:05:29.041005       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:05:29.041343       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:05:29.050581       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:05:29.050932       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:05:29.050972       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:29.053129       1 config.go:200] "Starting service config controller"
	I0929 12:05:29.053596       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:05:29.053638       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:05:29.053681       1 config.go:309] "Starting node config controller"
	I0929 12:05:29.053697       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:05:29.053704       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:05:29.053601       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:05:29.053551       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:05:29.054395       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:05:29.154177       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:05:29.155321       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:05:29.155345       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [468b88a7167c] <==
	I0929 12:06:33.005670       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:06:34.259096       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:06:34.259130       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:06:34.259142       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:06:34.259151       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:06:34.291389       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:06:34.291422       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:34.302889       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:34.302927       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:34.303634       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:06:34.304134       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 12:06:34.306186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 12:06:34.306321       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:06:34.306427       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:06:34.306595       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:06:34.306663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:06:34.308951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:06:34.308955       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:06:34.309120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I0929 12:06:34.403997       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [90223f818ad9] <==
	E0929 12:05:19.878776       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:19.879262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:05:19.879317       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:05:19.879326       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:05:19.879408       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:05:19.879413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:05:19.879442       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:05:19.879997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 12:05:19.880055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:05:20.787690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:20.796703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:05:20.797675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 12:05:20.828729       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 12:05:20.938370       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:05:21.036990       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:21.144021       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:05:21.199845       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:05:21.216120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0929 12:05:23.571597       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:12.996246       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:12.996364       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:06:12.997894       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:06:12.998708       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:06:12.998719       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:06:12.998740       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 12:23:25 embed-certs-031687 kubelet[1366]: E0929 12:23:25.906722    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:23:32 embed-certs-031687 kubelet[1366]: E0929 12:23:32.906327    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:23:37 embed-certs-031687 kubelet[1366]: E0929 12:23:37.912148    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:23:38 embed-certs-031687 kubelet[1366]: E0929 12:23:38.906624    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:23:45 embed-certs-031687 kubelet[1366]: E0929 12:23:45.906800    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:23:49 embed-certs-031687 kubelet[1366]: E0929 12:23:49.906159    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:23:52 embed-certs-031687 kubelet[1366]: E0929 12:23:52.906381    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:24:00 embed-certs-031687 kubelet[1366]: E0929 12:24:00.907017    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:24:03 embed-certs-031687 kubelet[1366]: E0929 12:24:03.914951    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:24:07 embed-certs-031687 kubelet[1366]: E0929 12:24:07.912607    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:24:13 embed-certs-031687 kubelet[1366]: E0929 12:24:13.906611    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:24:14 embed-certs-031687 kubelet[1366]: E0929 12:24:14.906997    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:24:21 embed-certs-031687 kubelet[1366]: E0929 12:24:21.906602    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:24:27 embed-certs-031687 kubelet[1366]: E0929 12:24:27.906288    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:24:28 embed-certs-031687 kubelet[1366]: E0929 12:24:28.907030    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:24:34 embed-certs-031687 kubelet[1366]: E0929 12:24:34.906511    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:24:39 embed-certs-031687 kubelet[1366]: E0929 12:24:39.907156    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:24:42 embed-certs-031687 kubelet[1366]: E0929 12:24:42.906540    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:24:48 embed-certs-031687 kubelet[1366]: E0929 12:24:48.906434    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:24:53 embed-certs-031687 kubelet[1366]: E0929 12:24:53.906802    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:24:56 embed-certs-031687 kubelet[1366]: E0929 12:24:56.905912    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:25:03 embed-certs-031687 kubelet[1366]: E0929 12:25:03.906428    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	Sep 29 12:25:04 embed-certs-031687 kubelet[1366]: E0929 12:25:04.906860    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-l9zp7" podUID="3644e7d0-9ed1-4318-b46e-d6c46932ae65"
	Sep 29 12:25:07 embed-certs-031687 kubelet[1366]: E0929 12:25:07.906716    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-77hqb" podUID="aef63d5e-86de-46d0-ad75-f9800545e9dd"
	Sep 29 12:25:15 embed-certs-031687 kubelet[1366]: E0929 12:25:15.906553    1366 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-w5slh" podUID="f4b93e5c-6c5e-4b2e-a390-b5ed49063ff5"
	
	
	==> storage-provisioner [7cfd570c5c36] <==
	W0929 12:24:50.645912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:52.648819       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:52.652935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:54.656032       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:54.661752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:56.665432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:56.671153       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:58.674993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:24:58.681360       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:00.685005       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:00.689445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:02.695149       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:02.701782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:04.706138       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:04.710782       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:06.715241       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:06.720332       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:08.723587       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:08.728034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:10.730947       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:10.736519       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:12.740482       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:12.744565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:14.747468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:14.751381       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cd9c371dd739] <==
	I0929 12:06:35.573515       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:07:05.575382       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031687 -n embed-certs-031687
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-031687 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-w5slh dashboard-metrics-scraper-6ffb444bf9-77hqb kubernetes-dashboard-855c9754f9-l9zp7
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-031687 describe pod metrics-server-746fcd58dc-w5slh dashboard-metrics-scraper-6ffb444bf9-77hqb kubernetes-dashboard-855c9754f9-l9zp7
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-031687 describe pod metrics-server-746fcd58dc-w5slh dashboard-metrics-scraper-6ffb444bf9-77hqb kubernetes-dashboard-855c9754f9-l9zp7: exit status 1 (58.967322ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-w5slh" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-77hqb" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-l9zp7" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-031687 describe pod metrics-server-746fcd58dc-w5slh dashboard-metrics-scraper-6ffb444bf9-77hqb kubernetes-dashboard-855c9754f9-l9zp7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.38s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-5bdqx" [d037c2d3-033d-420d-b665-eef2dd2e36bd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 12:17:03.042400  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:17:51.164258  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:18:38.762776  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:19:20.246118  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:19:20.588201  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:19:25.295112  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:19:46.414771  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:19:48.289155  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:19:53.819092  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:20:15.768802  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:21:09.479978  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:21:17.326561  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:21:38.833771  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:22:03.042128  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:22:40.393343  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:22:51.164646  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:23:26.104345  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/false-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:23:38.762491  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/bridge-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-306088 -n no-preload-306088
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 12:25:25.696514683 +0000 UTC m=+4407.740326766
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-306088 describe po kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-306088 describe po kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-5bdqx
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-306088/192.168.94.2
Start Time:       Mon, 29 Sep 2025 12:06:50 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ch8sn (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-ch8sn:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx to no-preload-306088
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m32s (x64 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m32s (x64 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-306088 logs kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-306088 logs kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard: exit status 1 (67.874472ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-5bdqx" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-306088 logs kubernetes-dashboard-855c9754f9-5bdqx -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-306088 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-306088
helpers_test.go:243: (dbg) docker inspect no-preload-306088:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831",
	        "Created": "2025-09-29T12:05:02.667478034Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 871291,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T12:06:36.757597432Z",
	            "FinishedAt": "2025-09-29T12:06:35.903235818Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831/hostname",
	        "HostsPath": "/var/lib/docker/containers/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831/hosts",
	        "LogPath": "/var/lib/docker/containers/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831/0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831-json.log",
	        "Name": "/no-preload-306088",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "no-preload-306088:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-306088",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0f0cd5d8dce415eecacb16912de36ff517c848f5a4d4ff804f2b67be3cd53831",
	                "LowerDir": "/var/lib/docker/overlay2/da25e1a08de11f6554acb2af0426af72b3ab8cb476b88a9f86451aa041390443-init/diff:/var/lib/docker/overlay2/e319d2e06e0d69cee9f4fe36792c5be9fd81a6b5fefed685a6f698ba1303cb61/diff",
	                "MergedDir": "/var/lib/docker/overlay2/da25e1a08de11f6554acb2af0426af72b3ab8cb476b88a9f86451aa041390443/merged",
	                "UpperDir": "/var/lib/docker/overlay2/da25e1a08de11f6554acb2af0426af72b3ab8cb476b88a9f86451aa041390443/diff",
	                "WorkDir": "/var/lib/docker/overlay2/da25e1a08de11f6554acb2af0426af72b3ab8cb476b88a9f86451aa041390443/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-306088",
	                "Source": "/var/lib/docker/volumes/no-preload-306088/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-306088",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-306088",
	                "name.minikube.sigs.k8s.io": "no-preload-306088",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "822fcef146208b224a6c528e2a9c025368dead1e675b20806d784d7d4441cf14",
	            "SandboxKey": "/var/run/docker/netns/822fcef14620",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33523"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33524"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33527"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33525"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33526"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-306088": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "fa:52:f0:1a:bb:5f",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d4ca4f1377a2f0c0999137059d5401179046ae6f170d7c85e62172b83a4ca5f9",
	                    "EndpointID": "44ede35c9e6699d3227d04b88944365e41464e7493fbda822be9e4cfdf17738f",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-306088",
	                        "0f0cd5d8dce4"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-306088 -n no-preload-306088
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-306088 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-306088 logs -n 25: (1.034374068s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ image   │ old-k8s-version-858855 image list --format=json                                                                                                                                                                                                 │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ pause   │ -p old-k8s-version-858855 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ unpause │ -p old-k8s-version-858855 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ delete  │ -p old-k8s-version-858855                                                                                                                                                                                                                       │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ delete  │ -p old-k8s-version-858855                                                                                                                                                                                                                       │ old-k8s-version-858855       │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ start   │ -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ addons  │ enable metrics-server -p newest-cni-979136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ stop    │ -p newest-cni-979136 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ addons  │ enable dashboard -p newest-cni-979136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:24 UTC │
	│ start   │ -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:24 UTC │ 29 Sep 25 12:25 UTC │
	│ image   │ default-k8s-diff-port-414542 image list --format=json                                                                                                                                                                                           │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ image   │ newest-cni-979136 image list --format=json                                                                                                                                                                                                      │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ pause   │ -p default-k8s-diff-port-414542 --alsologtostderr -v=1                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ pause   │ -p newest-cni-979136 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ unpause │ -p default-k8s-diff-port-414542 --alsologtostderr -v=1                                                                                                                                                                                          │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ unpause │ -p newest-cni-979136 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p default-k8s-diff-port-414542                                                                                                                                                                                                                 │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p newest-cni-979136                                                                                                                                                                                                                            │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p default-k8s-diff-port-414542                                                                                                                                                                                                                 │ default-k8s-diff-port-414542 │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p newest-cni-979136                                                                                                                                                                                                                            │ newest-cni-979136            │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ image   │ embed-certs-031687 image list --format=json                                                                                                                                                                                                     │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ pause   │ -p embed-certs-031687 --alsologtostderr -v=1                                                                                                                                                                                                    │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ unpause │ -p embed-certs-031687 --alsologtostderr -v=1                                                                                                                                                                                                    │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p embed-certs-031687                                                                                                                                                                                                                           │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	│ delete  │ -p embed-certs-031687                                                                                                                                                                                                                           │ embed-certs-031687           │ jenkins │ v1.37.0 │ 29 Sep 25 12:25 UTC │ 29 Sep 25 12:25 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 12:24:51
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 12:24:51.027836  905649 out.go:360] Setting OutFile to fd 1 ...
	I0929 12:24:51.028162  905649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:51.028175  905649 out.go:374] Setting ErrFile to fd 2...
	I0929 12:24:51.028179  905649 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 12:24:51.028374  905649 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 12:24:51.029337  905649 out.go:368] Setting JSON to false
	I0929 12:24:51.030825  905649 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7635,"bootTime":1759141056,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 12:24:51.030968  905649 start.go:140] virtualization: kvm guest
	I0929 12:24:51.032783  905649 out.go:179] * [newest-cni-979136] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 12:24:51.034019  905649 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 12:24:51.034055  905649 notify.go:220] Checking for updates...
	I0929 12:24:51.036459  905649 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 12:24:51.037859  905649 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:51.039082  905649 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 12:24:51.040311  905649 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 12:24:51.041587  905649 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 12:24:51.043195  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:51.043728  905649 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 12:24:51.068175  905649 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 12:24:51.068255  905649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:24:51.123146  905649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:24:51.112794792 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:24:51.123257  905649 docker.go:318] overlay module found
	I0929 12:24:51.125091  905649 out.go:179] * Using the docker driver based on existing profile
	I0929 12:24:51.126326  905649 start.go:304] selected driver: docker
	I0929 12:24:51.126339  905649 start.go:924] validating driver "docker" against &{Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirat
ion:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:51.126430  905649 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 12:24:51.127121  905649 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 12:24:51.186671  905649 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 12:24:51.176838416 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 12:24:51.187052  905649 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 12:24:51.187093  905649 cni.go:84] Creating CNI manager for ""
	I0929 12:24:51.187164  905649 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:24:51.187225  905649 start.go:348] cluster config:
	{Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Mou
ntMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:51.189168  905649 out.go:179] * Starting "newest-cni-979136" primary control-plane node in "newest-cni-979136" cluster
	I0929 12:24:51.190349  905649 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 12:24:51.192465  905649 out.go:179] * Pulling base image v0.0.48 ...
	I0929 12:24:51.193503  905649 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:24:51.193547  905649 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 12:24:51.193547  905649 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
	I0929 12:24:51.193585  905649 cache.go:58] Caching tarball of preloaded images
	I0929 12:24:51.193693  905649 preload.go:172] Found /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0929 12:24:51.193704  905649 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 12:24:51.193824  905649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/config.json ...
	I0929 12:24:51.214508  905649 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 12:24:51.214530  905649 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 12:24:51.214551  905649 cache.go:232] Successfully downloaded all kic artifacts
	I0929 12:24:51.214581  905649 start.go:360] acquireMachinesLock for newest-cni-979136: {Name:mkc9e89421b142ce40f5cb759383c5450ffdf976 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 12:24:51.214640  905649 start.go:364] duration metric: took 37.274µs to acquireMachinesLock for "newest-cni-979136"
	I0929 12:24:51.214660  905649 start.go:96] Skipping create...Using existing machine configuration
	I0929 12:24:51.214665  905649 fix.go:54] fixHost starting: 
	I0929 12:24:51.214885  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:51.232065  905649 fix.go:112] recreateIfNeeded on newest-cni-979136: state=Stopped err=<nil>
	W0929 12:24:51.232092  905649 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 12:24:51.234018  905649 out.go:252] * Restarting existing docker container for "newest-cni-979136" ...
	I0929 12:24:51.234081  905649 cli_runner.go:164] Run: docker start newest-cni-979136
	I0929 12:24:51.475044  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:51.494168  905649 kic.go:430] container "newest-cni-979136" state is running.
	I0929 12:24:51.494681  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:51.514623  905649 profile.go:143] Saving config to /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/config.json ...
	I0929 12:24:51.514852  905649 machine.go:93] provisionDockerMachine start ...
	I0929 12:24:51.514945  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:51.533238  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:51.533491  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:51.533504  905649 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 12:24:51.534277  905649 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:55270->127.0.0.1:33533: read: connection reset by peer
	I0929 12:24:54.676970  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-979136
	
	I0929 12:24:54.677005  905649 ubuntu.go:182] provisioning hostname "newest-cni-979136"
	I0929 12:24:54.677081  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:54.695975  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:54.696244  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:54.696263  905649 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-979136 && echo "newest-cni-979136" | sudo tee /etc/hostname
	I0929 12:24:54.848177  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-979136
	
	I0929 12:24:54.848263  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:54.868568  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:54.868809  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:54.868828  905649 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-979136' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-979136/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-979136' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 12:24:55.006440  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:24:55.006486  905649 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21655-357219/.minikube CaCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21655-357219/.minikube}
	I0929 12:24:55.006506  905649 ubuntu.go:190] setting up certificates
	I0929 12:24:55.006518  905649 provision.go:84] configureAuth start
	I0929 12:24:55.006580  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:55.025054  905649 provision.go:143] copyHostCerts
	I0929 12:24:55.025121  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem, removing ...
	I0929 12:24:55.025140  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem
	I0929 12:24:55.025215  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/ca.pem (1082 bytes)
	I0929 12:24:55.025317  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem, removing ...
	I0929 12:24:55.025326  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem
	I0929 12:24:55.025353  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/cert.pem (1123 bytes)
	I0929 12:24:55.025420  905649 exec_runner.go:144] found /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem, removing ...
	I0929 12:24:55.025427  905649 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem
	I0929 12:24:55.025450  905649 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21655-357219/.minikube/key.pem (1675 bytes)
	I0929 12:24:55.025513  905649 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem org=jenkins.newest-cni-979136 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-979136]
	I0929 12:24:55.243153  905649 provision.go:177] copyRemoteCerts
	I0929 12:24:55.243218  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 12:24:55.243264  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.263249  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:55.364291  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0929 12:24:55.389609  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 12:24:55.415500  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 12:24:55.440533  905649 provision.go:87] duration metric: took 434.000782ms to configureAuth
	I0929 12:24:55.440563  905649 ubuntu.go:206] setting minikube options for container-runtime
	I0929 12:24:55.440758  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:55.440818  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.460318  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.460729  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.460755  905649 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 12:24:55.597583  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 12:24:55.597610  905649 ubuntu.go:71] root file system type: overlay
	I0929 12:24:55.597747  905649 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 12:24:55.597807  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.619201  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.619420  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.619486  905649 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 12:24:55.771605  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 12:24:55.771704  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.790052  905649 main.go:141] libmachine: Using SSH client type: native
	I0929 12:24:55.790282  905649 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33533 <nil> <nil>}
	I0929 12:24:55.790300  905649 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 12:24:55.932301  905649 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 12:24:55.932334  905649 machine.go:96] duration metric: took 4.417466701s to provisionDockerMachine
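The provisioning step above only swaps in docker.service.new and restarts Docker when the generated unit differs from the one already on the node (the "diff -u ... || { mv ...; systemctl ... }" command). As a sketch, assuming the profile name shown in this log, the unit that actually ended up on the node can be inspected from the host; these commands are illustrative and are not part of the recorded test output:

    # Dump the docker unit the node is running, including the ExecStart= reset
    # described in the generated drop-in's comments.
    $ minikube ssh -p newest-cni-979136 -- sudo systemctl cat docker
    # Show the effective dockerd command line after that reset.
    $ minikube ssh -p newest-cni-979136 -- sudo systemctl show docker -p ExecStart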
	I0929 12:24:55.932351  905649 start.go:293] postStartSetup for "newest-cni-979136" (driver="docker")
	I0929 12:24:55.932365  905649 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 12:24:55.932465  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 12:24:55.932550  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:55.954244  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.052000  905649 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 12:24:56.055711  905649 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 12:24:56.055754  905649 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 12:24:56.055765  905649 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 12:24:56.055774  905649 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 12:24:56.055787  905649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/addons for local assets ...
	I0929 12:24:56.055831  905649 filesync.go:126] Scanning /home/jenkins/minikube-integration/21655-357219/.minikube/files for local assets ...
	I0929 12:24:56.055972  905649 filesync.go:149] local asset: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem -> 3607822.pem in /etc/ssl/certs
	I0929 12:24:56.056075  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 12:24:56.065385  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:24:56.090181  905649 start.go:296] duration metric: took 157.792312ms for postStartSetup
	I0929 12:24:56.090268  905649 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 12:24:56.090315  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.109744  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.202986  905649 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 12:24:56.207666  905649 fix.go:56] duration metric: took 4.992992519s for fixHost
	I0929 12:24:56.207696  905649 start.go:83] releasing machines lock for "newest-cni-979136", held for 4.993042953s
	I0929 12:24:56.207761  905649 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-979136
	I0929 12:24:56.225816  905649 ssh_runner.go:195] Run: cat /version.json
	I0929 12:24:56.225856  905649 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 12:24:56.225890  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.225953  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:56.243859  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.245388  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:56.410148  905649 ssh_runner.go:195] Run: systemctl --version
	I0929 12:24:56.415184  905649 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 12:24:56.419735  905649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 12:24:56.439126  905649 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 12:24:56.439194  905649 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 12:24:56.448391  905649 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 12:24:56.448426  905649 start.go:495] detecting cgroup driver to use...
	I0929 12:24:56.448461  905649 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:24:56.448625  905649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:24:56.465656  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 12:24:56.476251  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 12:24:56.486622  905649 containerd.go:146] configuring containerd to use "systemd" as cgroup driver...
	I0929 12:24:56.486697  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = true|g' /etc/containerd/config.toml"
	I0929 12:24:56.497049  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:24:56.507303  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 12:24:56.517167  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 12:24:56.527790  905649 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 12:24:56.537523  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 12:24:56.548028  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 12:24:56.558377  905649 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 12:24:56.568281  905649 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 12:24:56.577443  905649 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 12:24:56.586851  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:56.660866  905649 ssh_runner.go:195] Run: sudo systemctl restart containerd
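The sed edits above switch containerd to the systemd cgroup driver (SystemdCgroup = true), force the runc.v2 shim, and pin the sandbox image to registry.k8s.io/pause:3.10.1 before containerd is restarted. A quick way to confirm the edits landed, assuming the profile name from this log (illustrative, not recorded output):

    # Check the cgroup driver and sandbox image in containerd's config on the node.
    $ minikube ssh -p newest-cni-979136 -- sudo grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml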
	I0929 12:24:56.741769  905649 start.go:495] detecting cgroup driver to use...
	I0929 12:24:56.741823  905649 detect.go:190] detected "systemd" cgroup driver on host os
	I0929 12:24:56.741899  905649 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 12:24:56.755292  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:24:56.767224  905649 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 12:24:56.786855  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 12:24:56.799497  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 12:24:56.811529  905649 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 12:24:56.829453  905649 ssh_runner.go:195] Run: which cri-dockerd
	I0929 12:24:56.833521  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 12:24:56.842646  905649 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 12:24:56.860977  905649 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 12:24:56.931377  905649 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 12:24:57.001000  905649 docker.go:575] configuring docker to use "systemd" as cgroup driver...
	I0929 12:24:57.001140  905649 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (129 bytes)
	I0929 12:24:57.020740  905649 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 12:24:57.032094  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:57.102971  905649 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 12:24:57.943232  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 12:24:57.958776  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 12:24:57.970760  905649 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 12:24:57.983315  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:24:57.994666  905649 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 12:24:58.061628  905649 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 12:24:58.131372  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.196002  905649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 12:24:58.216042  905649 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 12:24:58.227496  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.296813  905649 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 12:24:58.382030  905649 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 12:24:58.396219  905649 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 12:24:58.396294  905649 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 12:24:58.400678  905649 start.go:563] Will wait 60s for crictl version
	I0929 12:24:58.400758  905649 ssh_runner.go:195] Run: which crictl
	I0929 12:24:58.404435  905649 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 12:24:58.440974  905649 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 12:24:58.441049  905649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:24:58.466313  905649 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 12:24:58.495007  905649 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 12:24:58.495109  905649 cli_runner.go:164] Run: docker network inspect newest-cni-979136 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 12:24:58.513187  905649 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I0929 12:24:58.517404  905649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:24:58.531305  905649 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0929 12:24:58.532547  905649 kubeadm.go:875] updating cluster {Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 12:24:58.532682  905649 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 12:24:58.532746  905649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:24:58.553550  905649 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 12:24:58.553578  905649 docker.go:621] Images already preloaded, skipping extraction
	I0929 12:24:58.553660  905649 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 12:24:58.574817  905649 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 12:24:58.574850  905649 cache_images.go:85] Images are preloaded, skipping loading
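Both docker images listings above return the full v1.34.0 control-plane image set, so the preload tarball found earlier is already unpacked into the node's Docker daemon and no extraction is needed. The same check can be reproduced from the host; this is a sketch using the profile name from this log, not recorded output:

    # Mirror the in-node listing minikube just ran.
    $ minikube ssh -p newest-cni-979136 -- docker images --format '{{.Repository}}:{{.Tag}}'
    # Or use minikube's own image listing for the profile.
    $ minikube image ls -p newest-cni-979136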
	I0929 12:24:58.574864  905649 kubeadm.go:926] updating node { 192.168.103.2 8443 v1.34.0 docker true true} ...
	I0929 12:24:58.575035  905649 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-979136 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 12:24:58.575101  905649 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 12:24:58.629742  905649 cni.go:84] Creating CNI manager for ""
	I0929 12:24:58.629778  905649 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 12:24:58.629793  905649 kubeadm.go:84] Using pod CIDR: 10.42.0.0/16
	I0929 12:24:58.629820  905649 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-979136 NodeName:newest-cni-979136 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 12:24:58.630059  905649 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-979136"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 12:24:58.630139  905649 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 12:24:58.640481  905649 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 12:24:58.640539  905649 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 12:24:58.650199  905649 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0929 12:24:58.670388  905649 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 12:24:58.690755  905649 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
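The kubeadm configuration rendered above is copied to /var/tmp/minikube/kubeadm.yaml.new on the node; later in this log the restart path diffs it against the existing /var/tmp/minikube/kubeadm.yaml to decide whether the control plane needs reconfiguring. A sketch of the same comparison run manually against this profile (illustrative, not recorded output):

    # Same check minikube performs before deciding to skip reconfiguration.
    $ minikube ssh -p newest-cni-979136 -- sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new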
	I0929 12:24:58.710213  905649 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I0929 12:24:58.714041  905649 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 12:24:58.726275  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:58.797764  905649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:24:58.820648  905649 certs.go:68] Setting up /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136 for IP: 192.168.103.2
	I0929 12:24:58.820678  905649 certs.go:194] generating shared ca certs ...
	I0929 12:24:58.820699  905649 certs.go:226] acquiring lock for ca certs: {Name:mkaa9c7bafe883ae5443007576feacd67d22be0a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:58.820926  905649 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key
	I0929 12:24:58.820988  905649 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key
	I0929 12:24:58.821002  905649 certs.go:256] generating profile certs ...
	I0929 12:24:58.821111  905649 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/client.key
	I0929 12:24:58.821198  905649 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.key.d397cfea
	I0929 12:24:58.821246  905649 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.key
	I0929 12:24:58.821404  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem (1338 bytes)
	W0929 12:24:58.821450  905649 certs.go:480] ignoring /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782_empty.pem, impossibly tiny 0 bytes
	I0929 12:24:58.821464  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 12:24:58.821501  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/ca.pem (1082 bytes)
	I0929 12:24:58.821531  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/cert.pem (1123 bytes)
	I0929 12:24:58.821564  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/certs/key.pem (1675 bytes)
	I0929 12:24:58.821615  905649 certs.go:484] found cert: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem (1708 bytes)
	I0929 12:24:58.824178  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 12:24:58.854835  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0929 12:24:58.885381  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 12:24:58.922169  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 12:24:58.954035  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 12:24:58.984832  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 12:24:59.010911  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 12:24:59.038716  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/newest-cni-979136/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 12:24:59.066074  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/ssl/certs/3607822.pem --> /usr/share/ca-certificates/3607822.pem (1708 bytes)
	I0929 12:24:59.092081  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 12:24:59.117971  905649 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21655-357219/.minikube/certs/360782.pem --> /usr/share/ca-certificates/360782.pem (1338 bytes)
	I0929 12:24:59.144530  905649 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 12:24:59.163121  905649 ssh_runner.go:195] Run: openssl version
	I0929 12:24:59.168833  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 12:24:59.178922  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.182635  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 11:12 /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.182700  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 12:24:59.189919  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 12:24:59.201241  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/360782.pem && ln -fs /usr/share/ca-certificates/360782.pem /etc/ssl/certs/360782.pem"
	I0929 12:24:59.211375  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.215068  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 11:17 /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.215127  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/360782.pem
	I0929 12:24:59.222147  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/360782.pem /etc/ssl/certs/51391683.0"
	I0929 12:24:59.231678  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3607822.pem && ln -fs /usr/share/ca-certificates/3607822.pem /etc/ssl/certs/3607822.pem"
	I0929 12:24:59.242049  905649 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.246376  905649 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 11:17 /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.246428  905649 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3607822.pem
	I0929 12:24:59.253659  905649 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3607822.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 12:24:59.263390  905649 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 12:24:59.267282  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 12:24:59.274371  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 12:24:59.281316  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 12:24:59.288070  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 12:24:59.295169  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 12:24:59.302222  905649 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 12:24:59.309049  905649 kubeadm.go:392] StartCluster: {Name:newest-cni-979136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:newest-cni-979136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 12:24:59.309197  905649 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 12:24:59.329631  905649 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 12:24:59.340419  905649 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 12:24:59.340443  905649 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 12:24:59.340499  905649 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 12:24:59.352342  905649 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 12:24:59.354702  905649 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-979136" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:59.355829  905649 kubeconfig.go:62] /home/jenkins/minikube-integration/21655-357219/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-979136" cluster setting kubeconfig missing "newest-cni-979136" context setting]
	I0929 12:24:59.356906  905649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:59.358824  905649 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 12:24:59.369732  905649 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.103.2
	I0929 12:24:59.369771  905649 kubeadm.go:593] duration metric: took 29.321487ms to restartPrimaryControlPlane
	I0929 12:24:59.369786  905649 kubeadm.go:394] duration metric: took 60.74854ms to StartCluster
	I0929 12:24:59.369807  905649 settings.go:142] acquiring lock: {Name:mk45813560b141d77d9a411f0986268ea674b64f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:59.370000  905649 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 12:24:59.372304  905649 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21655-357219/kubeconfig: {Name:mk4eb56c3ae116751e9496bc03bed315498c1f2f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 12:24:59.372523  905649 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 12:24:59.372601  905649 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 12:24:59.372719  905649 addons.go:69] Setting storage-provisioner=true in profile "newest-cni-979136"
	I0929 12:24:59.372746  905649 addons.go:238] Setting addon storage-provisioner=true in "newest-cni-979136"
	I0929 12:24:59.372756  905649 addons.go:69] Setting default-storageclass=true in profile "newest-cni-979136"
	I0929 12:24:59.372756  905649 addons.go:69] Setting metrics-server=true in profile "newest-cni-979136"
	W0929 12:24:59.372774  905649 addons.go:247] addon storage-provisioner should already be in state true
	I0929 12:24:59.372785  905649 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-979136"
	I0929 12:24:59.372787  905649 addons.go:238] Setting addon metrics-server=true in "newest-cni-979136"
	I0929 12:24:59.372774  905649 addons.go:69] Setting dashboard=true in profile "newest-cni-979136"
	I0929 12:24:59.372811  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.372828  905649 addons.go:238] Setting addon dashboard=true in "newest-cni-979136"
	W0929 12:24:59.372841  905649 addons.go:247] addon dashboard should already be in state true
	I0929 12:24:59.372868  905649 config.go:182] Loaded profile config "newest-cni-979136": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 12:24:59.372907  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	W0929 12:24:59.372798  905649 addons.go:247] addon metrics-server should already be in state true
	I0929 12:24:59.372999  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.373193  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373362  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373382  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.373688  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.374952  905649 out.go:179] * Verifying Kubernetes components...
	I0929 12:24:59.377094  905649 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 12:24:59.406520  905649 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 12:24:59.408932  905649 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:24:59.408962  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 12:24:59.409032  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.410909  905649 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 12:24:59.410960  905649 addons.go:238] Setting addon default-storageclass=true in "newest-cni-979136"
	W0929 12:24:59.411621  905649 addons.go:247] addon default-storageclass should already be in state true
	I0929 12:24:59.411678  905649 host.go:66] Checking if "newest-cni-979136" exists ...
	I0929 12:24:59.412421  905649 cli_runner.go:164] Run: docker container inspect newest-cni-979136 --format={{.State.Status}}
	I0929 12:24:59.412644  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 12:24:59.413305  905649 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 12:24:59.413569  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.412765  905649 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 12:24:59.415132  905649 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 12:24:59.417291  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 12:24:59.417368  905649 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 12:24:59.417470  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.445215  905649 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 12:24:59.446126  905649 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 12:24:59.446304  905649 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-979136
	I0929 12:24:59.450121  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.452948  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.463761  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.473057  905649 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33533 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/newest-cni-979136/id_rsa Username:docker}
	I0929 12:24:59.521104  905649 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 12:24:59.559358  905649 api_server.go:52] waiting for apiserver process to appear ...
	I0929 12:24:59.559440  905649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:24:59.588555  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 12:24:59.588580  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 12:24:59.590849  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:24:59.595174  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:24:59.596995  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 12:24:59.597012  905649 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 12:24:59.620788  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 12:24:59.620818  905649 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 12:24:59.630246  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 12:24:59.630275  905649 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 12:24:59.657282  905649 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:24:59.657315  905649 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 12:24:59.661119  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 12:24:59.661147  905649 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 12:24:59.685140  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 12:24:59.685170  905649 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	W0929 12:24:59.687157  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.687205  905649 retry.go:31] will retry after 361.728613ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 12:24:59.687243  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.687273  905649 retry.go:31] will retry after 219.336799ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.688567  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:24:59.709373  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 12:24:59.709406  905649 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 12:24:59.740607  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 12:24:59.740643  905649 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 12:24:59.768803  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 12:24:59.768851  905649 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0929 12:24:59.775203  905649 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.775240  905649 retry.go:31] will retry after 332.898484ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 12:24:59.796800  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 12:24:59.796831  905649 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 12:24:59.821588  905649 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:24:59.821619  905649 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 12:24:59.847936  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 12:24:59.907657  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 12:25:00.049522  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 12:25:00.060062  905649 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 12:25:00.108553  905649 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 12:25:01.867904  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.019889375s)
	I0929 12:25:01.867969  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (1.960182947s)
	I0929 12:25:01.870762  905649 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-979136 addons enable metrics-server
	
	I0929 12:25:02.020400  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.970827565s)
	I0929 12:25:02.020504  905649 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (1.960409127s)
	I0929 12:25:02.020541  905649 api_server.go:72] duration metric: took 2.647991456s to wait for apiserver process to appear ...
	I0929 12:25:02.020557  905649 api_server.go:88] waiting for apiserver healthz status ...
	I0929 12:25:02.020579  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:02.020607  905649 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.912007941s)
	I0929 12:25:02.020637  905649 addons.go:479] Verifying addon metrics-server=true in "newest-cni-979136"
	I0929 12:25:02.022202  905649 out.go:179] * Enabled addons: dashboard, default-storageclass, storage-provisioner, metrics-server
	I0929 12:25:02.023704  905649 addons.go:514] duration metric: took 2.651123895s for enable addons: enabled=[dashboard default-storageclass storage-provisioner metrics-server]
	I0929 12:25:02.026548  905649 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:02.026573  905649 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:02.521050  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:02.527087  905649 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:02.527120  905649 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:03.020698  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:03.025513  905649 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 12:25:03.025540  905649 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 12:25:03.521035  905649 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I0929 12:25:03.525634  905649 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I0929 12:25:03.526669  905649 api_server.go:141] control plane version: v1.34.0
	I0929 12:25:03.526694  905649 api_server.go:131] duration metric: took 1.506128439s to wait for apiserver health ...
	I0929 12:25:03.526707  905649 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 12:25:03.530192  905649 system_pods.go:59] 8 kube-system pods found
	I0929 12:25:03.530230  905649 system_pods.go:61] "coredns-66bc5c9577-gk5jp" [5541b17e-3975-4fda-a1e6-4fb4228931c8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 12:25:03.530241  905649 system_pods.go:61] "etcd-newest-cni-979136" [7b81140b-0f04-45e5-af0e-297e6a11f50c] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 12:25:03.530256  905649 system_pods.go:61] "kube-apiserver-newest-cni-979136" [d4665571-5d4f-409f-8d40-88dbd632ab57] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 12:25:03.530267  905649 system_pods.go:61] "kube-controller-manager-newest-cni-979136" [ce7c8835-111b-4bd1-997f-42bc7d8d43a7] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 12:25:03.530276  905649 system_pods.go:61] "kube-proxy-xksn2" [27ad0cb4-d548-4e8d-8d9e-64fad85f4633] Running
	I0929 12:25:03.530284  905649 system_pods.go:61] "kube-scheduler-newest-cni-979136" [832d940b-8c48-4e49-9ebe-ef6dd51b0f02] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 12:25:03.530294  905649 system_pods.go:61] "metrics-server-746fcd58dc-kl6rh" [52b32e3f-94ff-4bbd-aa81-1c106f59614e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 12:25:03.530299  905649 system_pods.go:61] "storage-provisioner" [6a60b242-8cbe-48c8-a86d-03b43412482c] Running
	I0929 12:25:03.530313  905649 system_pods.go:74] duration metric: took 3.599496ms to wait for pod list to return data ...
	I0929 12:25:03.530326  905649 default_sa.go:34] waiting for default service account to be created ...
	I0929 12:25:03.532656  905649 default_sa.go:45] found service account: "default"
	I0929 12:25:03.532682  905649 default_sa.go:55] duration metric: took 2.346917ms for default service account to be created ...
	I0929 12:25:03.532698  905649 kubeadm.go:578] duration metric: took 4.160147155s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0929 12:25:03.532726  905649 node_conditions.go:102] verifying NodePressure condition ...
	I0929 12:25:03.534982  905649 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0929 12:25:03.535004  905649 node_conditions.go:123] node cpu capacity is 8
	I0929 12:25:03.535020  905649 node_conditions.go:105] duration metric: took 2.285053ms to run NodePressure ...
	I0929 12:25:03.535039  905649 start.go:241] waiting for startup goroutines ...
	I0929 12:25:03.535053  905649 start.go:246] waiting for cluster config update ...
	I0929 12:25:03.535070  905649 start.go:255] writing updated cluster config ...
	I0929 12:25:03.535347  905649 ssh_runner.go:195] Run: rm -f paused
	I0929 12:25:03.594962  905649 start.go:623] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0929 12:25:03.597006  905649 out.go:179] * Done! kubectl is now configured to use "newest-cni-979136" cluster and "default" namespace by default
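The trace above ends with minikube's apiserver health wait: api_server.go polls https://192.168.103.2:8443/healthz roughly every 500ms, printing the per-hook [+]/[-] breakdown on each 500 response until the endpoint returns 200 about 1.5s in. The Go sketch below approximates that loop; the URL, the fixed 500ms interval, and the skip-verify TLS transport are illustrative assumptions, not minikube's actual client setup (minikube trusts the cluster CA rather than skipping verification).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200,
// printing the body (the per-hook status list seen in the log) on each failure.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption for the sketch only: skip certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}

In the run above the [-] entries (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, apiservice-discovery-controller) clear across the three polls before the final 200.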
	
	
	==> Docker <==
	Sep 29 12:12:34 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:34.530606980Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:12:40 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:40.548105245Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:40 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:40.593524606Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:12:40 no-preload-306088 dockerd[818]: time="2025-09-29T12:12:40.593633400Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:12:40 no-preload-306088 cri-dockerd[1128]: time="2025-09-29T12:12:40Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:17:35 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:35.374074751Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:17:35 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:35.374114370Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:17:35 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:35.376222218Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:17:35 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:35.376255725Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:17:38 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:38.500986279Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:17:38 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:38.529698304Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:17:46 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:46.545167780Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:17:46 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:46.606950841Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:17:46 no-preload-306088 dockerd[818]: time="2025-09-29T12:17:46.607070719Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:17:46 no-preload-306088 cri-dockerd[1128]: time="2025-09-29T12:17:46Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 12:22:43 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:43.501753653Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 12:22:43 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:43.533024976Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 12:22:47 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:47.986516591Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:22:47 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:47.986557608Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:22:47 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:47.988751458Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 12:22:47 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:47.988794988Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host"
	Sep 29 12:22:58 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:58.549466743Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:22:58 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:58.600665422Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 12:22:58 no-preload-306088 dockerd[818]: time="2025-09-29T12:22:58.600785006Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 12:22:58 no-preload-306088 cri-dockerd[1128]: time="2025-09-29T12:22:58Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
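In the Docker daemon log above, three pulls fail for different reasons: kubernetesui/dashboard hits Docker Hub's unauthenticated pull rate limit, registry.k8s.io/echoserver:1.4 is rejected because Docker Image manifest v2 schema 1 support has been removed, and the fake.domain pull fails DNS resolution (an unresolvable registry host). The failures recur at roughly five-minute intervals (12:12, 12:17, 12:22), which is consistent with a capped exponential image-pull backoff on the kubelet side. A minimal sketch of such a capped backoff, with assumed base and cap values rather than kubelet's exact constants:

package main

import (
	"fmt"
	"time"
)

// nextDelay doubles the previous delay up to maxDelay, the shape suggested by
// the ~5-minute spacing of the failed pulls above; the constants are assumptions.
func nextDelay(prev, base, maxDelay time.Duration) time.Duration {
	if prev == 0 {
		return base
	}
	if next := prev * 2; next < maxDelay {
		return next
	}
	return maxDelay
}

func main() {
	var d time.Duration
	for attempt := 1; attempt <= 8; attempt++ {
		d = nextDelay(d, 10*time.Second, 5*time.Minute)
		fmt.Printf("pull attempt %d retried after %s\n", attempt, d)
	}
}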
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	6069d4cc945c4       6e38f40d628db                                                                                         17 minutes ago      Running             storage-provisioner       2                   650a45c250449       storage-provisioner
	46b841525a645       56cc512116c8f                                                                                         18 minutes ago      Running             busybox                   1                   af0052ac783cc       busybox
	695a8602bc591       52546a367cc9e                                                                                         18 minutes ago      Running             coredns                   1                   0817cfa6d924e       coredns-66bc5c9577-llrxw
	04de2f2efa331       6e38f40d628db                                                                                         18 minutes ago      Exited              storage-provisioner       1                   650a45c250449       storage-provisioner
	63e413deaec6d       df0860106674d                                                                                         18 minutes ago      Running             kube-proxy                1                   5786a938d52ef       kube-proxy-79hf6
	2e89a50fa22a0       46169d968e920                                                                                         18 minutes ago      Running             kube-scheduler            1                   869508ebc6f7f       kube-scheduler-no-preload-306088
	a85939dbef502       5f1f5298c888d                                                                                         18 minutes ago      Running             etcd                      1                   973a42ce3b13d       etcd-no-preload-306088
	7ede5c29532f1       a0af72f2ec6d6                                                                                         18 minutes ago      Running             kube-controller-manager   1                   3d511beab43f5       kube-controller-manager-no-preload-306088
	9703afde994b8       90550c43ad2bc                                                                                         18 minutes ago      Running             kube-apiserver            1                   209a43e67b76e       kube-apiserver-no-preload-306088
	78749e8a0d6c3       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Exited              busybox                   0                   757705e35ec09       busybox
	d6c5675d0c4db       52546a367cc9e                                                                                         19 minutes ago      Exited              coredns                   0                   7afc0cf80b590       coredns-66bc5c9577-llrxw
	2ed702618e45b       df0860106674d                                                                                         19 minutes ago      Exited              kube-proxy                0                   2f17d84d2ba37       kube-proxy-79hf6
	7a7e42d61c6cf       90550c43ad2bc                                                                                         19 minutes ago      Exited              kube-apiserver            0                   d732eb1833307       kube-apiserver-no-preload-306088
	58da5b85bf37f       a0af72f2ec6d6                                                                                         19 minutes ago      Exited              kube-controller-manager   0                   ea46b63ce01fc       kube-controller-manager-no-preload-306088
	b128aa5b2b94e       5f1f5298c888d                                                                                         19 minutes ago      Exited              etcd                      0                   c4c68bc2d42e1       etcd-no-preload-306088
	ff7fabe12bd91       46169d968e920                                                                                         19 minutes ago      Exited              kube-scheduler            0                   11596b316c317       kube-scheduler-no-preload-306088
	
	
	==> coredns [695a8602bc59] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44581 - 5114 "HINFO IN 1169221059218682807.6276513997277860298. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020548991s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [d6c5675d0c4d] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	[INFO] Reloading complete
	[INFO] 127.0.0.1:39009 - 55424 "HINFO IN 56200610660337702.1748388028457110117. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.017413364s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
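Both CoreDNS instances above start before they can reach the API: the kubernetes plugin blocks on "waiting for Kubernetes API", the ready plugin keeps answering not-ready ("Still waiting on: kubernetes"), and the list calls against the 10.96.0.1:443 service VIP time out until kube-proxy has programmed service routing after the restart. The ready plugin's state can be probed directly; a minimal sketch, assuming a placeholder pod IP and the plugin's default 8181/ready endpoint:

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	// Placeholder pod IP for illustration; 8181/ready is the ready plugin's default.
	resp, err := client.Get("http://10.244.0.2:8181/ready")
	if err != nil {
		fmt.Println("ready endpoint unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Non-200 here corresponds to the "Still waiting on: kubernetes" lines above.
	fmt.Printf("CoreDNS ready: %d %s\n", resp.StatusCode, body)
}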
	
	
	==> describe nodes <==
	Name:               no-preload-306088
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-306088
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e087d081f23c6d1317bb12845422265d8d3490cf
	                    minikube.k8s.io/name=no-preload-306088
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T12_05_34_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 12:05:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-306088
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 12:25:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 12:20:53 +0000   Mon, 29 Sep 2025 12:05:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 12:20:53 +0000   Mon, 29 Sep 2025 12:05:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 12:20:53 +0000   Mon, 29 Sep 2025 12:05:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 12:20:53 +0000   Mon, 29 Sep 2025 12:05:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-306088
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863456Ki
	  pods:               110
	System Info:
	  Machine ID:                 7b538631cbe7481ba166a7b39bb33163
	  System UUID:                e3735703-9e50-4250-a924-a82c25214cd9
	  Boot ID:                    7892f883-017b-40ec-b18f-d6c900a242a7
	  Kernel Version:             6.8.0-1040-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-llrxw                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     19m
	  kube-system                 etcd-no-preload-306088                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         19m
	  kube-system                 kube-apiserver-no-preload-306088              250m (3%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-no-preload-306088     200m (2%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-79hf6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-306088              100m (1%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-cbm6p               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-bmfvn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-5bdqx         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeHasSufficientPID     19m                kubelet          Node no-preload-306088 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node no-preload-306088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node no-preload-306088 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           19m                node-controller  Node no-preload-306088 event: Registered Node no-preload-306088 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-306088 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-306088 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-306088 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-306088 event: Registered Node no-preload-306088 in Controller
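The Allocated resources block is the sum of the per-pod requests listed above: 100m + 100m + 250m + 200m + 100m + 100m = 850m of the node's 8 allocatable CPUs, and 70Mi + 100Mi + 200Mi = 370Mi of ~32 GiB allocatable memory. A quick re-derivation (integer division happens to reproduce the 10% / 1% shown in the table):

package main

import "fmt"

func main() {
	// Values copied from the pod table above (CPU in millicores, memory in Mi).
	cpuRequestsMilli := []int{100, 100, 250, 200, 100, 100} // coredns, etcd, apiserver, controller-manager, scheduler, metrics-server
	memRequestsMi := []int{70, 100, 200}                    // coredns, etcd, metrics-server

	var cpu, mem int
	for _, v := range cpuRequestsMilli {
		cpu += v
	}
	for _, v := range memRequestsMi {
		mem += v
	}

	const nodeCPUMilli = 8 * 1000 // 8 allocatable CPUs
	const nodeMemKi = 32863456    // allocatable memory from the node description
	fmt.Printf("cpu: %dm (%d%%)\n", cpu, cpu*100/nodeCPUMilli)
	fmt.Printf("memory: %dMi (%d%%)\n", mem, mem*1024*100/nodeMemKi)
}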
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 52 8f 99 59 79 53 08 06
	[  +0.010443] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 52 ef 7b 7a 25 80 08 06
	[Sep29 12:05] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff a6 2f 1f 69 18 cd 08 06
	[  +1.465609] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 6e fa a1 d1 16 fd 08 06
	[  +0.010904] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 7a 28 d0 79 65 86 08 06
	[ +11.321410] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 56 4d be 93 b2 64 08 06
	[  +0.030376] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff 6a d1 94 90 6f a6 08 06
	[  +0.372330] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff 2a ae 62 92 9c b4 08 06
	[Sep29 12:06] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff be c7 f6 43 2b 7f 08 06
	[ +17.127071] IPv4: martian source 10.244.0.1 from 10.244.0.6, on dev eth0
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 9a de e7 85 72 24 08 06
	[ +12.501214] IPv4: martian source 10.244.0.1 from 10.244.0.5, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 4d 9c c6 34 d5 08 06
	[Sep29 12:24] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ee 8f 0c 17 b8 91 08 06
	[Sep29 12:25] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 2e 5f 3c 17 4f d8 08 06
	
	
	==> etcd [a85939dbef50] <==
	{"level":"warn","ts":"2025-09-29T12:06:46.485414Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37274","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.492081Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37306","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.498746Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.504643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.510868Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37356","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.530053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.536234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37402","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.542639Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.548785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.555709Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.565537Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37466","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.572659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.580145Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:06:46.635662Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37512","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:16:46.137244Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1031}
	{"level":"info","ts":"2025-09-29T12:16:46.157820Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1031,"took":"20.241764ms","hash":73998397,"current-db-size-bytes":3104768,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1241088,"current-db-size-in-use":"1.2 MB"}
	{"level":"info","ts":"2025-09-29T12:16:46.157920Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":73998397,"revision":1031,"compact-revision":-1}
	{"level":"warn","ts":"2025-09-29T12:17:48.520132Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"120.407546ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-29T12:17:48.520245Z","caller":"traceutil/trace.go:172","msg":"trace[151808702] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1343; }","duration":"120.553269ms","start":"2025-09-29T12:17:48.399672Z","end":"2025-09-29T12:17:48.520225Z","steps":["trace[151808702] 'agreement among raft nodes before linearized reading'  (duration: 63.964158ms)","trace[151808702] 'range keys from in-memory index tree'  (duration: 56.402188ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T12:17:48.520253Z","caller":"traceutil/trace.go:172","msg":"trace[749181551] transaction","detail":"{read_only:false; response_revision:1345; number_of_response:1; }","duration":"119.622132ms","start":"2025-09-29T12:17:48.400617Z","end":"2025-09-29T12:17:48.520239Z","steps":["trace[749181551] 'process raft request'  (duration: 119.573279ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-29T12:17:48.520308Z","caller":"traceutil/trace.go:172","msg":"trace[1975213384] transaction","detail":"{read_only:false; response_revision:1344; number_of_response:1; }","duration":"122.554128ms","start":"2025-09-29T12:17:48.397736Z","end":"2025-09-29T12:17:48.520290Z","steps":["trace[1975213384] 'process raft request'  (duration: 65.952782ms)","trace[1975213384] 'compare'  (duration: 56.378968ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-29T12:21:46.141866Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1285}
	{"level":"info","ts":"2025-09-29T12:21:46.144517Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1285,"took":"2.320431ms","hash":3319831616,"current-db-size-bytes":3104768,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1753088,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T12:21:46.144553Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3319831616,"revision":1285,"compact-revision":1031}
	{"level":"info","ts":"2025-09-29T12:24:16.184224Z","caller":"traceutil/trace.go:172","msg":"trace[1624405302] transaction","detail":"{read_only:false; response_revision:1671; number_of_response:1; }","duration":"113.508407ms","start":"2025-09-29T12:24:16.070694Z","end":"2025-09-29T12:24:16.184202Z","steps":["trace[1624405302] 'process raft request'  (duration: 113.361103ms)"],"step_count":1}
	
	
	==> etcd [b128aa5b2b94] <==
	{"level":"warn","ts":"2025-09-29T12:05:30.476645Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.484557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.492020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.499821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.514208Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.528213Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T12:05:30.590557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32770","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T12:06:25.685500Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T12:06:25.685576Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"no-preload-306088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	{"level":"error","ts":"2025-09-29T12:06:25.686267Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:32.688844Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T12:06:32.688948Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:32.689026Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"dfc97eb0aae75b33","current-leader-member-id":"dfc97eb0aae75b33"}
	{"level":"warn","ts":"2025-09-29T12:06:32.689025Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:32.689052Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:32.689074Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T12:06:32.689085Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.94.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T12:06:32.689087Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"error","ts":"2025-09-29T12:06:32.689087Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:32.689099Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-09-29T12:06:32.689097Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.94.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:32.693452Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"error","ts":"2025-09-29T12:06:32.693510Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.94.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T12:06:32.693543Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-09-29T12:06:32.693553Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"no-preload-306088","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"]}
	
	
	==> kernel <==
	 12:25:26 up  2:07,  0 users,  load average: 1.26, 0.90, 1.56
	Linux no-preload-306088 6.8.0-1040-gcp #42~22.04.1-Ubuntu SMP Tue Sep  9 13:30:57 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [7a7e42d61c6c] <==
	W0929 12:06:34.632071       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.661092       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.739074       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.766705       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.772151       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.891739       1 logging.go:55] [core] [Channel #191 SubChannel #193]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.929280       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.933672       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.960640       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:34.973054       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.079867       1 logging.go:55] [core] [Channel #67 SubChannel #69]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.188159       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.189459       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.199851       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.203109       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.380388       1 logging.go:55] [core] [Channel #99 SubChannel #101]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.387433       1 logging.go:55] [core] [Channel #87 SubChannel #89]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.407230       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.424449       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.473223       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.524023       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.551036       1 logging.go:55] [core] [Channel #215 SubChannel #217]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.558146       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.587154       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 12:06:35.591740       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [9703afde994b] <==
	 > logger="UnhandledError"
	I0929 12:21:48.098902       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:22:14.881731       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:22:48.098029       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:22:48.098080       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:22:48.098100       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:22:48.099154       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:22:48.099252       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:22:48.099265       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 12:22:56.693795       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:23:36.470115       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:24:16.752758       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 12:24:46.609668       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 12:24:48.098327       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:24:48.098373       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 12:24:48.098386       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 12:24:48.099482       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 12:24:48.099562       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 12:24:48.099574       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [58da5b85bf37] <==
	I0929 12:05:38.078703       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 12:05:38.078838       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0929 12:05:38.078860       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 12:05:38.078984       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 12:05:38.079008       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 12:05:38.079009       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 12:05:38.078989       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 12:05:38.079592       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 12:05:38.079606       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0929 12:05:38.080747       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 12:05:38.082019       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 12:05:38.082094       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 12:05:38.082132       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 12:05:38.082139       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 12:05:38.082145       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 12:05:38.083117       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 12:05:38.084329       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:38.084349       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 12:05:38.088568       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 12:05:38.089478       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-306088" podCIDRs=["10.244.0.0/24"]
	I0929 12:05:38.096559       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 12:05:38.101865       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:05:38.107018       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 12:05:38.107033       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 12:05:38.107048       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [7ede5c29532f] <==
	I0929 12:19:19.922157       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:19:49.833462       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:19:49.929030       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:20:19.838076       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:20:19.936537       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:20:49.842944       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:20:49.944348       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:21:19.847544       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:21:19.952101       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:21:49.851807       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:21:49.959493       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:22:19.856441       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:22:19.967035       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:22:49.861205       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:22:49.974521       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:23:19.865385       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:23:19.981344       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:23:49.869994       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:23:49.988119       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:24:19.874525       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:24:19.994967       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:24:49.878849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:24:50.001839       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 12:25:19.883684       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 12:25:20.009462       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [2ed702618e45] <==
	I0929 12:05:39.837498       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:05:39.943552       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:05:40.044666       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:05:40.044966       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0929 12:05:40.045591       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:05:40.119388       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:05:40.119455       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:05:40.133167       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:05:40.134809       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:05:40.134834       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:05:40.137295       1 config.go:200] "Starting service config controller"
	I0929 12:05:40.137327       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:05:40.137561       1 config.go:309] "Starting node config controller"
	I0929 12:05:40.137625       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:05:40.138057       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:05:40.138085       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:05:40.139064       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:05:40.141993       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:05:40.142014       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:05:40.238427       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:05:40.238444       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 12:05:40.238465       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [63e413deaec6] <==
	I0929 12:06:48.166300       1 server_linux.go:53] "Using iptables proxy"
	I0929 12:06:48.227574       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 12:06:48.327779       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 12:06:48.327841       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E0929 12:06:48.328000       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 12:06:48.355101       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 12:06:48.355193       1 server_linux.go:132] "Using iptables Proxier"
	I0929 12:06:48.361175       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 12:06:48.361551       1 server.go:527] "Version info" version="v1.34.0"
	I0929 12:06:48.361572       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:48.363070       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 12:06:48.363239       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 12:06:48.363137       1 config.go:106] "Starting endpoint slice config controller"
	I0929 12:06:48.363385       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 12:06:48.363166       1 config.go:309] "Starting node config controller"
	I0929 12:06:48.363408       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 12:06:48.363414       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 12:06:48.363096       1 config.go:200] "Starting service config controller"
	I0929 12:06:48.363465       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 12:06:48.463686       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 12:06:48.463718       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 12:06:48.463732       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [2e89a50fa22a] <==
	I0929 12:06:45.657172       1 serving.go:386] Generated self-signed cert in-memory
	W0929 12:06:47.054776       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 12:06:47.054807       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 12:06:47.054820       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 12:06:47.054830       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 12:06:47.088813       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 12:06:47.088847       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 12:06:47.092925       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:47.092970       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:47.092972       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 12:06:47.093624       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 12:06:47.193859       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [ff7fabe12bd9] <==
	E0929 12:05:31.102158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:05:31.102124       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 12:05:31.102405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:31.102404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:05:31.102549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:31.910906       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 12:05:31.922100       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 12:05:31.953399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 12:05:32.007111       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 12:05:32.021511       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 12:05:32.024706       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 12:05:32.130675       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 12:05:32.139772       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 12:05:32.163992       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0929 12:05:32.169052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 12:05:32.183135       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 12:05:32.199506       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 12:05:32.207629       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 12:05:32.291748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	I0929 12:05:35.396173       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 12:06:25.685308       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 12:06:25.685457       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 12:06:25.685480       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 12:06:25.685540       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 12:06:25.685564       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 12:23:47 no-preload-306088 kubelet[1344]: E0929 12:23:47.486975    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:23:47 no-preload-306088 kubelet[1344]: E0929 12:23:47.486976    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:23:51 no-preload-306088 kubelet[1344]: E0929 12:23:51.484477    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:24:02 no-preload-306088 kubelet[1344]: E0929 12:24:02.486626    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:24:02 no-preload-306088 kubelet[1344]: E0929 12:24:02.486999    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:24:05 no-preload-306088 kubelet[1344]: E0929 12:24:05.484272    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:24:13 no-preload-306088 kubelet[1344]: E0929 12:24:13.484568    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:24:13 no-preload-306088 kubelet[1344]: E0929 12:24:13.484638    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:24:19 no-preload-306088 kubelet[1344]: E0929 12:24:19.484863    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:24:24 no-preload-306088 kubelet[1344]: E0929 12:24:24.490741    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:24:27 no-preload-306088 kubelet[1344]: E0929 12:24:27.485062    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:24:30 no-preload-306088 kubelet[1344]: E0929 12:24:30.485327    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:24:37 no-preload-306088 kubelet[1344]: E0929 12:24:37.484916    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:24:41 no-preload-306088 kubelet[1344]: E0929 12:24:41.484687    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:24:42 no-preload-306088 kubelet[1344]: E0929 12:24:42.484837    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:24:48 no-preload-306088 kubelet[1344]: E0929 12:24:48.485602    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:24:54 no-preload-306088 kubelet[1344]: E0929 12:24:54.484777    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:24:57 no-preload-306088 kubelet[1344]: E0929 12:24:57.484597    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:25:01 no-preload-306088 kubelet[1344]: E0929 12:25:01.485576    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:25:08 no-preload-306088 kubelet[1344]: E0929 12:25:08.485403    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:25:10 no-preload-306088 kubelet[1344]: E0929 12:25:10.485253    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	Sep 29 12:25:12 no-preload-306088 kubelet[1344]: E0929 12:25:12.484980    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:25:23 no-preload-306088 kubelet[1344]: E0929 12:25:23.484983    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.94.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-cbm6p" podUID="e65b594e-5e46-445b-8dc4-ff9d686cdc94"
	Sep 29 12:25:24 no-preload-306088 kubelet[1344]: E0929 12:25:24.484810    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-5bdqx" podUID="d037c2d3-033d-420d-b665-eef2dd2e36bd"
	Sep 29 12:25:25 no-preload-306088 kubelet[1344]: E0929 12:25:25.485079    1344 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-bmfvn" podUID="29b96462-9943-4cf7-9594-3a853b33daf7"
	
	
	==> storage-provisioner [04de2f2efa33] <==
	I0929 12:06:48.101582       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 12:07:18.104409       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [6069d4cc945c] <==
	W0929 12:25:02.381656       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:04.385806       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:04.391557       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:06.395813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:06.403257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:08.405913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:08.417642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:10.421132       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:10.425011       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:12.428411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:12.433809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:14.436378       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:14.440277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:16.444458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:16.448173       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:18.451161       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:18.455922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:20.459279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:20.465055       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:22.468702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:22.472759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:24.475462       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:24.479390       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:26.482857       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 12:25:26.488413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-306088 -n no-preload-306088
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-306088 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-cbm6p dashboard-metrics-scraper-6ffb444bf9-bmfvn kubernetes-dashboard-855c9754f9-5bdqx
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-306088 describe pod metrics-server-746fcd58dc-cbm6p dashboard-metrics-scraper-6ffb444bf9-bmfvn kubernetes-dashboard-855c9754f9-5bdqx
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-306088 describe pod metrics-server-746fcd58dc-cbm6p dashboard-metrics-scraper-6ffb444bf9-bmfvn kubernetes-dashboard-855c9754f9-5bdqx: exit status 1 (58.891148ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-cbm6p" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-bmfvn" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5bdqx" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-306088 describe pod metrics-server-746fcd58dc-cbm6p dashboard-metrics-scraper-6ffb444bf9-bmfvn kubernetes-dashboard-855c9754f9-5bdqx: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.32s)
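Triage note (not part of the test output): the kubelet log above shows the dashboard and metrics pods stuck in ImagePullBackOff because of Docker Hub's unauthenticated pull rate limit ("toomanyrequests"). A minimal manual check from the affected host is sketched below, assuming outbound access to Docker Hub and that curl and jq are installed; the ratelimitpreview/test repository is Docker's documented probe target, not anything used by the test itself.

	# Request an anonymous pull token, then read the ratelimit-limit /
	# ratelimit-remaining headers registry-1.docker.io attaches to a manifest HEAD request.
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest" | grep -i ratelimit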

                                                
                                    

Test pass (307/341)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 4.6
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.0/json-events 3.98
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.06
18 TestDownloadOnly/v1.34.0/DeleteAll 0.21
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 1.08
21 TestBinaryMirror 0.81
22 TestOffline 81.28
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 130.99
29 TestAddons/serial/Volcano 38.64
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 8.51
35 TestAddons/parallel/Registry 15.78
36 TestAddons/parallel/RegistryCreds 0.65
37 TestAddons/parallel/Ingress 19.22
38 TestAddons/parallel/InspektorGadget 6.21
39 TestAddons/parallel/MetricsServer 6.62
41 TestAddons/parallel/CSI 43.52
42 TestAddons/parallel/Headlamp 16.49
43 TestAddons/parallel/CloudSpanner 5.45
44 TestAddons/parallel/LocalPath 9.07
45 TestAddons/parallel/NvidiaDevicePlugin 6.45
46 TestAddons/parallel/Yakd 10.65
47 TestAddons/parallel/AmdGpuDevicePlugin 6.5
48 TestAddons/StoppedEnableDisable 11.17
49 TestCertOptions 31.85
50 TestCertExpiration 247.68
51 TestDockerFlags 25.35
52 TestForceSystemdFlag 42.99
53 TestForceSystemdEnv 27.42
55 TestKVMDriverInstallOrUpdate 0.65
59 TestErrorSpam/setup 23.14
60 TestErrorSpam/start 0.61
61 TestErrorSpam/status 0.92
62 TestErrorSpam/pause 1.16
63 TestErrorSpam/unpause 1.23
64 TestErrorSpam/stop 10.89
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 60.9
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 47.46
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.06
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.02
76 TestFunctional/serial/CacheCmd/cache/add_local 0.7
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.28
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
84 TestFunctional/serial/ExtraConfig 53.42
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1
87 TestFunctional/serial/LogsFileCmd 0.99
88 TestFunctional/serial/InvalidService 4.7
90 TestFunctional/parallel/ConfigCmd 0.37
92 TestFunctional/parallel/DryRun 0.39
93 TestFunctional/parallel/InternationalLanguage 0.16
94 TestFunctional/parallel/StatusCmd 0.95
98 TestFunctional/parallel/ServiceCmdConnect 8.54
99 TestFunctional/parallel/AddonsCmd 0.12
102 TestFunctional/parallel/SSHCmd 0.63
103 TestFunctional/parallel/CpCmd 1.77
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.7
110 TestFunctional/parallel/NodeLabels 0.06
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.27
114 TestFunctional/parallel/License 0.27
115 TestFunctional/parallel/ServiceCmd/DeployApp 8.2
117 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.54
118 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
120 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.25
121 TestFunctional/parallel/ServiceCmd/List 0.48
122 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
124 TestFunctional/parallel/ServiceCmd/HTTPS 0.34
125 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
129 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
130 TestFunctional/parallel/ServiceCmd/Format 0.34
131 TestFunctional/parallel/ServiceCmd/URL 0.37
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.46
133 TestFunctional/parallel/MountCmd/any-port 7.97
134 TestFunctional/parallel/ProfileCmd/profile_list 0.4
135 TestFunctional/parallel/ProfileCmd/profile_json_output 0.4
136 TestFunctional/parallel/Version/short 0.05
137 TestFunctional/parallel/Version/components 0.47
138 TestFunctional/parallel/DockerEnv/bash 1.02
139 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
140 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.13
141 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.13
142 TestFunctional/parallel/ImageCommands/ImageListShort 0.2
143 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
144 TestFunctional/parallel/ImageCommands/ImageListJson 0.2
145 TestFunctional/parallel/ImageCommands/ImageListYaml 0.21
146 TestFunctional/parallel/ImageCommands/ImageBuild 2.57
147 TestFunctional/parallel/ImageCommands/Setup 0.38
148 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.95
149 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.82
150 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 0.89
151 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.31
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.53
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.35
155 TestFunctional/parallel/MountCmd/specific-port 1.98
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.64
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 98.21
165 TestMultiControlPlane/serial/DeployApp 45.73
166 TestMultiControlPlane/serial/PingHostFromPods 1.14
167 TestMultiControlPlane/serial/AddWorkerNode 14.71
168 TestMultiControlPlane/serial/NodeLabels 0.07
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.94
170 TestMultiControlPlane/serial/CopyFile 16.86
171 TestMultiControlPlane/serial/StopSecondaryNode 11.5
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
173 TestMultiControlPlane/serial/RestartSecondaryNode 66.46
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.96
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 170.98
176 TestMultiControlPlane/serial/DeleteSecondaryNode 9.48
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
178 TestMultiControlPlane/serial/StopCluster 22.83
179 TestMultiControlPlane/serial/RestartCluster 93.92
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.67
181 TestMultiControlPlane/serial/AddSecondaryNode 32.86
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.9
185 TestImageBuild/serial/Setup 21.66
186 TestImageBuild/serial/NormalBuild 0.97
187 TestImageBuild/serial/BuildWithBuildArg 0.65
188 TestImageBuild/serial/BuildWithDockerIgnore 0.46
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.47
193 TestJSONOutput/start/Command 63.79
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.48
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.44
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 5.73
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.21
218 TestKicCustomNetwork/create_custom_network 23.46
219 TestKicCustomNetwork/use_default_bridge_network 23.07
220 TestKicExistingNetwork 24.68
221 TestKicCustomSubnet 23.53
222 TestKicStaticIP 23.48
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 52.24
227 TestMountStart/serial/StartWithMountFirst 7.59
228 TestMountStart/serial/VerifyMountFirst 0.25
229 TestMountStart/serial/StartWithMountSecond 7.51
230 TestMountStart/serial/VerifyMountSecond 0.25
231 TestMountStart/serial/DeleteFirst 1.5
232 TestMountStart/serial/VerifyMountPostDelete 0.25
233 TestMountStart/serial/Stop 1.18
234 TestMountStart/serial/RestartStopped 8.3
235 TestMountStart/serial/VerifyMountPostStop 0.25
238 TestMultiNode/serial/FreshStart2Nodes 44.76
239 TestMultiNode/serial/DeployApp2Nodes 39.74
240 TestMultiNode/serial/PingHostFrom2Pods 0.79
241 TestMultiNode/serial/AddNode 14.48
242 TestMultiNode/serial/MultiNodeLabels 0.07
243 TestMultiNode/serial/ProfileList 0.69
244 TestMultiNode/serial/CopyFile 9.69
245 TestMultiNode/serial/StopNode 2.17
246 TestMultiNode/serial/StartAfterStop 8.7
247 TestMultiNode/serial/RestartKeepsNodes 70.54
248 TestMultiNode/serial/DeleteNode 5.17
249 TestMultiNode/serial/StopMultiNode 21.64
250 TestMultiNode/serial/RestartMultiNode 48.05
251 TestMultiNode/serial/ValidateNameConflict 24.76
256 TestPreload 129.4
258 TestScheduledStopUnix 95.97
259 TestSkaffold 74.77
261 TestInsufficientStorage 9.75
262 TestRunningBinaryUpgrade 46.77
264 TestKubernetesUpgrade 347.48
265 TestMissingContainerUpgrade 95.91
267 TestStoppedBinaryUpgrade/Setup 0.4
268 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
269 TestNoKubernetes/serial/StartWithK8s 43.68
270 TestStoppedBinaryUpgrade/Upgrade 68.16
271 TestNoKubernetes/serial/StartWithStopK8s 17.31
272 TestNoKubernetes/serial/Start 7.11
273 TestNoKubernetes/serial/VerifyK8sNotRunning 0.32
274 TestNoKubernetes/serial/ProfileList 4.47
275 TestStoppedBinaryUpgrade/MinikubeLogs 1
276 TestNoKubernetes/serial/Stop 1.22
277 TestNoKubernetes/serial/StartNoArgs 8.48
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
280 TestPause/serial/Start 60.78
281 TestPause/serial/SecondStartNoReconfiguration 51.43
300 TestPause/serial/Pause 0.53
301 TestPause/serial/VerifyStatus 0.31
302 TestPause/serial/Unpause 0.47
303 TestPause/serial/PauseAgain 0.54
304 TestPause/serial/DeletePaused 2.23
305 TestPause/serial/VerifyDeletedResources 0.74
306 TestNetworkPlugins/group/auto/Start 42.79
307 TestNetworkPlugins/group/kindnet/Start 57.43
308 TestNetworkPlugins/group/auto/KubeletFlags 0.29
309 TestNetworkPlugins/group/auto/NetCatPod 10.26
310 TestNetworkPlugins/group/auto/DNS 0.19
311 TestNetworkPlugins/group/auto/Localhost 0.15
312 TestNetworkPlugins/group/auto/HairPin 0.16
313 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
315 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
316 TestNetworkPlugins/group/kindnet/NetCatPod 10.29
317 TestNetworkPlugins/group/custom-flannel/Start 48.66
318 TestNetworkPlugins/group/kindnet/DNS 0.16
319 TestNetworkPlugins/group/kindnet/Localhost 0.15
320 TestNetworkPlugins/group/kindnet/HairPin 0.14
321 TestNetworkPlugins/group/false/Start 69.26
322 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
323 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.23
324 TestNetworkPlugins/group/custom-flannel/DNS 0.2
325 TestNetworkPlugins/group/custom-flannel/Localhost 0.13
326 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
327 TestNetworkPlugins/group/enable-default-cni/Start 61.36
328 TestNetworkPlugins/group/false/KubeletFlags 0.29
329 TestNetworkPlugins/group/false/NetCatPod 9.23
330 TestNetworkPlugins/group/false/DNS 0.16
331 TestNetworkPlugins/group/false/Localhost 0.14
332 TestNetworkPlugins/group/false/HairPin 0.15
333 TestNetworkPlugins/group/flannel/Start 115.23
334 TestNetworkPlugins/group/bridge/Start 64.77
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.29
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.2
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.24
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.18
340 TestNetworkPlugins/group/kubenet/Start 63.72
341 TestNetworkPlugins/group/bridge/KubeletFlags 0.33
342 TestNetworkPlugins/group/bridge/NetCatPod 10.19
343 TestNetworkPlugins/group/bridge/DNS 0.14
344 TestNetworkPlugins/group/bridge/Localhost 0.12
345 TestNetworkPlugins/group/bridge/HairPin 0.14
347 TestStartStop/group/old-k8s-version/serial/FirstStart 38.83
348 TestNetworkPlugins/group/flannel/ControllerPod 6.01
349 TestNetworkPlugins/group/kubenet/KubeletFlags 0.29
350 TestNetworkPlugins/group/kubenet/NetCatPod 10.23
351 TestNetworkPlugins/group/flannel/KubeletFlags 0.37
352 TestNetworkPlugins/group/flannel/NetCatPod 10.31
353 TestNetworkPlugins/group/kubenet/DNS 0.15
354 TestNetworkPlugins/group/kubenet/Localhost 0.15
355 TestNetworkPlugins/group/kubenet/HairPin 0.14
356 TestNetworkPlugins/group/flannel/DNS 0.18
357 TestNetworkPlugins/group/flannel/Localhost 0.14
358 TestNetworkPlugins/group/flannel/HairPin 0.14
359 TestStartStop/group/old-k8s-version/serial/DeployApp 9.37
360 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.25
362 TestStartStop/group/embed-certs/serial/FirstStart 66.56
363 TestStartStop/group/old-k8s-version/serial/Stop 12.31
365 TestStartStop/group/no-preload/serial/FirstStart 74.88
367 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.51
368 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.23
369 TestStartStop/group/old-k8s-version/serial/SecondStart 50.77
370 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.27
371 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.78
372 TestStartStop/group/default-k8s-diff-port/serial/Stop 10.8
374 TestStartStop/group/embed-certs/serial/DeployApp 8.25
375 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.17
376 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 51.72
377 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.85
378 TestStartStop/group/embed-certs/serial/Stop 10.83
379 TestStartStop/group/no-preload/serial/DeployApp 8.33
380 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
381 TestStartStop/group/embed-certs/serial/SecondStart 48.42
382 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.83
383 TestStartStop/group/no-preload/serial/Stop 10.79
384 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
385 TestStartStop/group/no-preload/serial/SecondStart 46.53
393 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
394 TestStartStop/group/old-k8s-version/serial/Pause 2.3
396 TestStartStop/group/newest-cni/serial/FirstStart 28.23
397 TestStartStop/group/newest-cni/serial/DeployApp 0
398 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.74
399 TestStartStop/group/newest-cni/serial/Stop 10.79
400 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
401 TestStartStop/group/newest-cni/serial/SecondStart 12.95
402 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.23
403 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
404 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
405 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
406 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.61
407 TestStartStop/group/newest-cni/serial/Pause 2.51
408 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.21
409 TestStartStop/group/embed-certs/serial/Pause 2.22
410 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
411 TestStartStop/group/no-preload/serial/Pause 2.18
x
+
TestDownloadOnly/v1.28.0/json-events (4.6s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-544684 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-544684 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (4.597797376s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.60s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 11:12:02.591803  360782 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0929 11:12:02.591922  360782 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-544684
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-544684: exit status 85 (62.064803ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-544684 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-544684 │ jenkins │ v1.37.0 │ 29 Sep 25 11:11 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:11:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:11:58.035518  360794 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:11:58.035788  360794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:11:58.035809  360794 out.go:374] Setting ErrFile to fd 2...
	I0929 11:11:58.035814  360794 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:11:58.036043  360794 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	W0929 11:11:58.036187  360794 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21655-357219/.minikube/config/config.json: open /home/jenkins/minikube-integration/21655-357219/.minikube/config/config.json: no such file or directory
	I0929 11:11:58.036644  360794 out.go:368] Setting JSON to true
	I0929 11:11:58.037665  360794 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3262,"bootTime":1759141056,"procs":293,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:11:58.037755  360794 start.go:140] virtualization: kvm guest
	I0929 11:11:58.039928  360794 out.go:99] [download-only-544684] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:11:58.040061  360794 notify.go:220] Checking for updates...
	W0929 11:11:58.040087  360794 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 11:11:58.041599  360794 out.go:171] MINIKUBE_LOCATION=21655
	I0929 11:11:58.043326  360794 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:11:58.044850  360794 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 11:11:58.046117  360794 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 11:11:58.047351  360794 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 11:11:58.049952  360794 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:11:58.050199  360794 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:11:58.073433  360794 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:11:58.073501  360794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:11:58.127859  360794 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 11:11:58.117015806 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:11:58.128007  360794 docker.go:318] overlay module found
	I0929 11:11:58.129944  360794 out.go:99] Using the docker driver based on user configuration
	I0929 11:11:58.129981  360794 start.go:304] selected driver: docker
	I0929 11:11:58.129989  360794 start.go:924] validating driver "docker" against <nil>
	I0929 11:11:58.130081  360794 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:11:58.182401  360794 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-29 11:11:58.172673835 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:11:58.182569  360794 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:11:58.183339  360794 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 11:11:58.183553  360794 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:11:58.185810  360794 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-544684 host does not exist
	  To start a cluster, run: "minikube start -p download-only-544684"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-544684
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/json-events (3.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-587859 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-587859 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (3.980961134s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (3.98s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 11:12:06.982982  360782 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0929 11:12:06.983024  360782 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21655-357219/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-587859
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-587859: exit status 85 (63.696322ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-544684 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-544684 │ jenkins │ v1.37.0 │ 29 Sep 25 11:11 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 11:12 UTC │ 29 Sep 25 11:12 UTC │
	│ delete  │ -p download-only-544684                                                                                                                                                       │ download-only-544684 │ jenkins │ v1.37.0 │ 29 Sep 25 11:12 UTC │ 29 Sep 25 11:12 UTC │
	│ start   │ -o=json --download-only -p download-only-587859 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-587859 │ jenkins │ v1.37.0 │ 29 Sep 25 11:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 11:12:03
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 11:12:03.042675  361145 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:12:03.042959  361145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:12:03.042969  361145 out.go:374] Setting ErrFile to fd 2...
	I0929 11:12:03.042973  361145 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:12:03.043183  361145 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:12:03.043665  361145 out.go:368] Setting JSON to true
	I0929 11:12:03.044705  361145 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3267,"bootTime":1759141056,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:12:03.044797  361145 start.go:140] virtualization: kvm guest
	I0929 11:12:03.046763  361145 out.go:99] [download-only-587859] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:12:03.046918  361145 notify.go:220] Checking for updates...
	I0929 11:12:03.048297  361145 out.go:171] MINIKUBE_LOCATION=21655
	I0929 11:12:03.049569  361145 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:12:03.050760  361145 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 11:12:03.051945  361145 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 11:12:03.053170  361145 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0929 11:12:03.055489  361145 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 11:12:03.055697  361145 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:12:03.078259  361145 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:12:03.078362  361145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:12:03.130631  361145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-29 11:12:03.121154618 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:12:03.130743  361145 docker.go:318] overlay module found
	I0929 11:12:03.132675  361145 out.go:99] Using the docker driver based on user configuration
	I0929 11:12:03.132712  361145 start.go:304] selected driver: docker
	I0929 11:12:03.132718  361145 start.go:924] validating driver "docker" against <nil>
	I0929 11:12:03.132808  361145 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:12:03.187041  361145 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-29 11:12:03.176033307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:12:03.187245  361145 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 11:12:03.187754  361145 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0929 11:12:03.187942  361145 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 11:12:03.190169  361145 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-587859 host does not exist
	  To start a cluster, run: "minikube start -p download-only-587859"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.06s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-587859
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestDownloadOnlyKic (1.08s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-633243 --alsologtostderr --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "download-docker-633243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-633243
--- PASS: TestDownloadOnlyKic (1.08s)

                                                
                                    
x
+
TestBinaryMirror (0.81s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 11:12:08.732858  360782 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-312508 --alsologtostderr --binary-mirror http://127.0.0.1:34407 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-312508" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-312508
--- PASS: TestBinaryMirror (0.81s)
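The "Not caching binary" line above points at the upstream kubectl artifact together with its published .sha256 file. A minimal sketch of the equivalent manual verification, assuming curl and sha256sum are available on the host (illustration only, not part of the test run):

	curl -LO "https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl"
	curl -LO "https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256"
	# The .sha256 file holds only the digest, so pair it with the filename
	# before handing it to sha256sum --check.
	echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check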

                                                
                                    
x
+
TestOffline (81.28s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-978842 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-978842 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m19.084075192s)
helpers_test.go:175: Cleaning up "offline-docker-978842" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-978842
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-978842: (2.191138287s)
--- PASS: TestOffline (81.28s)

                                                
                                    
x
+
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-323939
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-323939: exit status 85 (53.460757ms)

                                                
                                                
-- stdout --
	* Profile "addons-323939" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-323939"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-323939
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-323939: exit status 85 (54.300186ms)

                                                
                                                
-- stdout --
	* Profile "addons-323939" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-323939"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

                                                
                                    
x
+
TestAddons/Setup (130.99s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-323939 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-323939 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m10.992014345s)
--- PASS: TestAddons/Setup (130.99s)

                                                
                                    
x
+
TestAddons/serial/Volcano (38.64s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 15.454714ms
addons_test.go:884: volcano-controller stabilized in 15.765549ms
addons_test.go:868: volcano-scheduler stabilized in 16.326041ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-9hlds" [7a06f7c5-cf5d-4e62-b7cb-8c840c540f31] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.003535272s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-bqm9x" [1fab1e4d-5036-465f-aa81-134e1b27363c] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003813567s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-shnxf" [8d5e7c23-0ae6-406a-94c7-f02cfa20c7d1] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003374941s
addons_test.go:903: (dbg) Run:  kubectl --context addons-323939 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-323939 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-323939 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [f5b8c582-075e-47b8-b468-6e9103966bae] Pending
helpers_test.go:352: "test-job-nginx-0" [f5b8c582-075e-47b8-b468-6e9103966bae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [f5b8c582-075e-47b8-b468-6e9103966bae] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 12.00379672s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323939 addons disable volcano --alsologtostderr -v=1: (11.273298643s)
--- PASS: TestAddons/serial/Volcano (38.64s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-323939 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-323939 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/FakeCredentials (8.51s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-323939 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-323939 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [088f233f-45da-4bb4-82f6-5c299e4cfa4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [088f233f-45da-4bb4-82f6-5c299e4cfa4a] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.004045862s
addons_test.go:694: (dbg) Run:  kubectl --context addons-323939 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-323939 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-323939 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.51s)
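
The gcp-auth check above can be replayed by hand against the same profile; the steps mirror the logged commands (pod and service-account names taken from this run, the fake credentials are injected by the gcp-auth addon):

  kubectl --context addons-323939 create -f testdata/busybox.yaml
  kubectl --context addons-323939 create sa gcp-auth-test
  # once the busybox pod is Running, the addon should have injected credentials into it
  kubectl --context addons-323939 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-323939 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"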

                                                
                                    
x
+
TestAddons/parallel/Registry (15.78s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.325153ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-4cp52" [66c6c8ac-07e5-42ea-b90d-9a11746ab14c] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.002979679s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-dq7gw" [655737e2-8885-4102-9c74-ecad527afb36] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003754454s
addons_test.go:392: (dbg) Run:  kubectl --context addons-323939 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-323939 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-323939 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.035202116s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 ip
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.78s)
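
The registry check boils down to resolving the in-cluster registry service from a throwaway pod; a hand-run equivalent of the logged command (image and service name as in the run above):

  # spider the registry service from inside the cluster; a successful response means the addon is serving
  kubectl --context addons-323939 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
  minikube -p addons-323939 ip   # host-side address used for the registry-proxy check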

                                                
                                    
x
+
TestAddons/parallel/RegistryCreds (0.65s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.455542ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-323939
addons_test.go:332: (dbg) Run:  kubectl --context addons-323939 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.65s)

                                                
                                    
x
+
TestAddons/parallel/Ingress (19.22s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-323939 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-323939 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-323939 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [92dd7bc7-5557-4cdb-8bab-1f98a20e606e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [92dd7bc7-5557-4cdb-8bab-1f98a20e606e] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004251294s
I0929 11:15:41.934723  360782 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-323939 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323939 addons disable ingress-dns --alsologtostderr -v=1: (1.335819997s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323939 addons disable ingress --alsologtostderr -v=1: (7.626444375s)
--- PASS: TestAddons/parallel/Ingress (19.22s)
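
The ingress and ingress-dns checks above can be replayed manually once the nginx test deployment is Running; the commands mirror this run (nginx.example.com and hello-john.test come from the test manifests, 192.168.49.2 is this cluster's IP):

  # ingress: curl through the controller from inside the node, with the expected Host header
  minikube -p addons-323939 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # ingress-dns: resolve a test hostname against the cluster IP
  nslookup hello-john.test 192.168.49.2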

                                                
                                    
x
+
TestAddons/parallel/InspektorGadget (6.21s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-7wmbb" [13248b39-08d2-42f9-9eb8-42574385afed] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.003199715s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.21s)

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (6.62s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.194658ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-99fn6" [a6f5b61d-e7a0-4c71-83bf-78c7bfe5f627] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00327862s
addons_test.go:463: (dbg) Run:  kubectl --context addons-323939 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable metrics-server --alsologtostderr -v=1
2025/09/29 11:15:31 [DEBUG] GET http://192.168.49.2:5000
--- PASS: TestAddons/parallel/MetricsServer (6.62s)
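
Once the metrics-server pod is healthy, the assertion above is just a metrics query; a hand-run equivalent:

  # should return CPU/memory usage for kube-system pods once metrics are being scraped
  kubectl --context addons-323939 top pods -n kube-system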

                                                
                                    
x
+
TestAddons/parallel/CSI (43.52s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 11:15:23.137293  360782 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 11:15:23.140562  360782 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 11:15:23.140591  360782 kapi.go:107] duration metric: took 3.322404ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.335172ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [8245e053-9c31-4349-8f88-20a8752a5285] Pending
helpers_test.go:352: "task-pv-pod" [8245e053-9c31-4349-8f88-20a8752a5285] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [8245e053-9c31-4349-8f88-20a8752a5285] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003655473s
addons_test.go:572: (dbg) Run:  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-323939 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-323939 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-323939 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-323939 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [d45826cd-c4f7-4544-87c9-f2108e0c6390] Pending
helpers_test.go:352: "task-pv-pod-restore" [d45826cd-c4f7-4544-87c9-f2108e0c6390] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [d45826cd-c4f7-4544-87c9-f2108e0c6390] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003895401s
addons_test.go:614: (dbg) Run:  kubectl --context addons-323939 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-323939 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-323939 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323939 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.487091753s)
--- PASS: TestAddons/parallel/CSI (43.52s)
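
The CSI sequence above exercises the full provision/snapshot/restore loop with the csi-hostpath driver; the manifests are the testdata files named in the log. A condensed, hand-runnable outline of the same flow (the jsonpath probes are the ones the helpers poll):

  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/pvc.yaml
  kubectl --context addons-323939 get pvc hpvc -o jsonpath={.status.phase}        # poll until Bound
  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/pv-pod.yaml
  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/snapshot.yaml
  kubectl --context addons-323939 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse}
  kubectl --context addons-323939 delete pod task-pv-pod
  kubectl --context addons-323939 delete pvc hpvc
  # restore the snapshot into a new claim and mount it in a fresh pod
  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
  kubectl --context addons-323939 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml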

                                                
                                    
x
+
TestAddons/parallel/Headlamp (16.49s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-323939 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-4s6km" [dcc99f90-e87a-4ff8-9db2-638bba45458b] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-4s6km" [dcc99f90-e87a-4ff8-9db2-638bba45458b] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-4s6km" [dcc99f90-e87a-4ff8-9db2-638bba45458b] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003547422s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323939 addons disable headlamp --alsologtostderr -v=1: (5.74571266s)
--- PASS: TestAddons/parallel/Headlamp (16.49s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-mhtz9" [17f1c8cd-103c-49a0-bad9-2450a69486ba] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003444119s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.45s)

                                                
                                    
x
+
TestAddons/parallel/LocalPath (9.07s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-323939 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-323939 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-323939 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [ee31e504-d1fe-4a3a-bc91-d036edade264] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [ee31e504-d1fe-4a3a-bc91-d036edade264] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [ee31e504-d1fe-4a3a-bc91-d036edade264] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003157082s
addons_test.go:967: (dbg) Run:  kubectl --context addons-323939 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 ssh "cat /opt/local-path-provisioner/pvc-306adacc-0887-4e78-8d22-a9f979a38885_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-323939 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-323939 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.07s)
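
The local-path test follows the same pattern: bind a PVC through the rancher local-path provisioner, let a pod write to it, then read the data back from the node. A hand-run outline (the on-disk directory name embeds the PVC's UID, so it differs per run; this run's path is shown in the log above):

  kubectl --context addons-323939 apply -f testdata/storage-provisioner-rancher/pvc.yaml
  kubectl --context addons-323939 apply -f testdata/storage-provisioner-rancher/pod.yaml
  kubectl --context addons-323939 get pvc test-pvc -o jsonpath={.status.phase}
  # data lands under /opt/local-path-provisioner/pvc-<uid>_default_test-pvc on the node
  minikube -p addons-323939 ssh "ls /opt/local-path-provisioner/"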

                                                
                                    
x
+
TestAddons/parallel/NvidiaDevicePlugin (6.45s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-5lbwv" [a6f2d21c-dcea-4083-bf90-ad94ca13eafc] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003444554s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.45s)

                                                
                                    
x
+
TestAddons/parallel/Yakd (10.65s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-6868b" [372d66bd-978f-4a01-9b46-a0723d78ee3f] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004534084s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-323939 addons disable yakd --alsologtostderr -v=1: (5.644297382s)
--- PASS: TestAddons/parallel/Yakd (10.65s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (6.5s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-fk7vg" [801e952c-3947-4a57-92c0-c0f6e7082c01] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.0048138s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-323939 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.50s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (11.17s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-323939
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-323939: (10.926203282s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-323939
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-323939
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-323939
--- PASS: TestAddons/StoppedEnableDisable (11.17s)
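
This final addon test checks that enable/disable still work against a stopped cluster (they only mutate the profile's config); a minimal replay of the logged steps:

  minikube stop -p addons-323939
  minikube addons enable dashboard -p addons-323939
  minikube addons disable dashboard -p addons-323939
  minikube addons disable gvisor -p addons-323939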

                                                
                                    
x
+
TestCertOptions (31.85s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-472393 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-472393 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (28.852560859s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-472393 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-472393 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-472393 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-472393" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-472393
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-472393: (2.31756926s)
--- PASS: TestCertOptions (31.85s)
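
TestCertOptions verifies that extra SANs and a custom API-server port end up in the generated certificate and kubeconfig; the flags and the inspection commands below are taken from the run above (cert-options-472393 is the throwaway profile name):

  minikube start -p cert-options-472393 --memory=3072 \
    --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
    --apiserver-names=localhost --apiserver-names=www.google.com \
    --apiserver-port=8555 --driver=docker --container-runtime=docker
  # the extra IPs/names should appear as SANs in the apiserver certificate
  minikube -p cert-options-472393 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
  # the kubeconfig should point at the custom port
  kubectl --context cert-options-472393 config view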

                                                
                                    
x
+
TestCertExpiration (247.68s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-788277 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-788277 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (29.03080615s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-788277 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
E0929 11:59:49.576925  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-788277 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (36.398347586s)
helpers_test.go:175: Cleaning up "cert-expiration-788277" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-788277
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-788277: (2.246037941s)
--- PASS: TestCertExpiration (247.68s)
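
TestCertExpiration starts a cluster with deliberately short-lived certificates and later restarts it with a long expiry to confirm minikube regenerates them; the two starts below mirror this run (the gap between them is what lets the 3m certs lapse):

  minikube start -p cert-expiration-788277 --memory=3072 --cert-expiration=3m    --driver=docker --container-runtime=docker
  # ...wait out the short expiry, then restart with a long one
  minikube start -p cert-expiration-788277 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=docker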

                                                
                                    
x
+
TestDockerFlags (25.35s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-057491 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-057491 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (22.511186634s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-057491 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-057491 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-057491" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-057491
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-057491: (2.192402441s)
--- PASS: TestDockerFlags (25.35s)
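
TestDockerFlags passes extra environment variables and daemon options through to dockerd inside the node and then reads them back from the systemd unit; a hand-run equivalent of the logged commands:

  minikube start -p docker-flags-057491 --memory=3072 --wait=false \
    --docker-env=FOO=BAR --docker-env=BAZ=BAT \
    --docker-opt=debug --docker-opt=icc=true \
    --driver=docker --container-runtime=docker
  # FOO=BAR and BAZ=BAT should show up under Environment, debug/icc under ExecStart
  minikube -p docker-flags-057491 ssh "sudo systemctl show docker --property=Environment --no-pager"
  minikube -p docker-flags-057491 ssh "sudo systemctl show docker --property=ExecStart --no-pager"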

                                                
                                    
x
+
TestForceSystemdFlag (42.99s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-296523 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-296523 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.36116539s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-296523 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-296523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-296523
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-296523: (2.262284682s)
--- PASS: TestForceSystemdFlag (42.99s)
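
The assertion here is that Docker inside the node reports the systemd cgroup driver when the cluster is started with --force-systemd; a simplified replay of the logged commands:

  minikube start -p force-systemd-flag-296523 --memory=3072 --force-systemd --driver=docker --container-runtime=docker
  # expected to print "systemd" rather than "cgroupfs"
  minikube -p force-systemd-flag-296523 ssh "docker info --format {{.CgroupDriver}}"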

                                                
                                    
x
+
TestForceSystemdEnv (27.42s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-274975 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-274975 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (24.915176693s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-274975 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-274975" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-274975
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-274975: (2.192736701s)
--- PASS: TestForceSystemdEnv (27.42s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0.65s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0929 11:57:05.405746  360782 install.go:66] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0929 11:57:05.406143  360782 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3258845960/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:57:05.449599  360782 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3258845960/001/docker-machine-driver-kvm2 version is 1.1.1
W0929 11:57:05.449645  360782 install.go:76] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0
W0929 11:57:05.449775  360782 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0929 11:57:05.449826  360782 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3258845960/001/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (0.65s)

                                                
                                    
x
+
TestErrorSpam/setup (23.14s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-129723 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-129723 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-129723 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-129723 --driver=docker  --container-runtime=docker: (23.137700558s)
--- PASS: TestErrorSpam/setup (23.14s)

                                                
                                    
x
+
TestErrorSpam/start (0.61s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 start --dry-run
--- PASS: TestErrorSpam/start (0.61s)

                                                
                                    
x
+
TestErrorSpam/status (0.92s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 status
--- PASS: TestErrorSpam/status (0.92s)

                                                
                                    
x
+
TestErrorSpam/pause (1.16s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 pause
--- PASS: TestErrorSpam/pause (1.16s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.23s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 unpause
--- PASS: TestErrorSpam/unpause (1.23s)

                                                
                                    
x
+
TestErrorSpam/stop (10.89s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 stop: (10.706450155s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-129723 --log_dir /tmp/nospam-129723 stop
--- PASS: TestErrorSpam/stop (10.89s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21655-357219/.minikube/files/etc/test/nested/copy/360782/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (60.9s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113333 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-113333 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m0.903017627s)
--- PASS: TestFunctional/serial/StartWithProxy (60.90s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (47.46s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0929 11:18:01.178126  360782 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113333 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-113333 --alsologtostderr -v=8: (47.457479486s)
functional_test.go:678: soft start took 47.458242605s for "functional-113333" cluster.
I0929 11:18:48.636026  360782 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (47.46s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-113333 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (2.02s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.02s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (0.7s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-113333 /tmp/TestFunctionalserialCacheCmdcacheadd_local790391574/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cache add minikube-local-cache-test:functional-113333
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cache delete minikube-local-cache-test:functional-113333
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-113333
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.70s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (279.560871ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.28s)
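
The cache_reload subtest removes a cached image from inside the node and confirms that "minikube cache reload" puts it back; a hand-run version of the logged sequence (profile functional-113333, pause:latest previously added via "cache add"):

  minikube -p functional-113333 ssh sudo docker rmi registry.k8s.io/pause:latest
  minikube -p functional-113333 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now fails: image gone
  minikube -p functional-113333 cache reload
  minikube -p functional-113333 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again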

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 kubectl -- --context functional-113333 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-113333 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (53.42s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113333 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 11:19:20.597071  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:20.603444  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:20.614806  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:20.636165  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:20.677555  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:20.758961  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:20.920472  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:21.242174  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:21.884099  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:23.166111  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:25.728834  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:30.850209  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:19:41.091612  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-113333 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (53.423963602s)
functional_test.go:776: restart took 53.42411223s for "functional-113333" cluster.
I0929 11:19:46.861184  360782 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (53.42s)
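Note: the repeated cert_rotation errors above point at a missing client.crt for the separate addons-323939 profile and do not affect the functional-113333 restart. For reference, a minimal manual equivalent of this restart, assuming the binary under test is invoked as minikube, would be:

    # restart the existing profile, passing an admission-plugin flag through to kube-apiserver
    # and waiting for all components to become ready
    minikube start -p functional-113333 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all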

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-113333 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-113333 logs: (1.002307319s)
--- PASS: TestFunctional/serial/LogsCmd (1.00s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (0.99s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 logs --file /tmp/TestFunctionalserialLogsFileCmd3026648552/001/logs.txt
--- PASS: TestFunctional/serial/LogsFileCmd (0.99s)

                                                
                                    
TestFunctional/serial/InvalidService (4.7s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-113333 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-113333
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-113333: exit status 115 (330.415118ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:31860 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-113333 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-113333 delete -f testdata/invalidsvc.yaml: (1.205142326s)
--- PASS: TestFunctional/serial/InvalidService (4.70s)
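For orientation: invalidsvc.yaml defines a Service with no running backing pod, so `minikube service` correctly refuses to open it and exits 115 with SVC_UNREACHABLE. A rough manual reproduction, assuming the same testdata manifest, is:

    kubectl --context functional-113333 apply -f testdata/invalidsvc.yaml
    minikube service invalid-svc -p functional-113333      # exits 115: SVC_UNREACHABLE
    kubectl --context functional-113333 delete -f testdata/invalidsvc.yaml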

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 config get cpus: exit status 14 (79.699604ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 config get cpus: exit status 14 (56.809385ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.37s)
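The config round trip above shows the expected semantics: `config get` on an unset key exits with status 14 and "specified key could not be found in config", while set followed by get succeeds. In isolation, assuming the same profile:

    minikube -p functional-113333 config unset cpus
    minikube -p functional-113333 config get cpus      # exit 14: key not found
    minikube -p functional-113333 config set cpus 2
    minikube -p functional-113333 config get cpus      # prints 2
    minikube -p functional-113333 config unset cpus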

                                                
                                    
TestFunctional/parallel/DryRun (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113333 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-113333 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (164.966136ms)

                                                
                                                
-- stdout --
	* [functional-113333] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:20:04.109475  408688 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:04.109728  408688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.109739  408688 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:04.109744  408688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.109972  408688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:20:04.110404  408688 out.go:368] Setting JSON to false
	I0929 11:20:04.111461  408688 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3748,"bootTime":1759141056,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:20:04.111558  408688 start.go:140] virtualization: kvm guest
	I0929 11:20:04.114468  408688 out.go:179] * [functional-113333] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0929 11:20:04.115698  408688 notify.go:220] Checking for updates...
	I0929 11:20:04.115728  408688 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:20:04.116914  408688 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:20:04.117997  408688 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 11:20:04.119054  408688 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 11:20:04.120394  408688 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:20:04.121612  408688 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:20:04.123353  408688 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:20:04.123914  408688 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:20:04.150573  408688 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:20:04.150719  408688 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:04.207118  408688 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-29 11:20:04.195862491 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:20:04.207282  408688 docker.go:318] overlay module found
	I0929 11:20:04.209792  408688 out.go:179] * Using the docker driver based on existing profile
	I0929 11:20:04.211041  408688 start.go:304] selected driver: docker
	I0929 11:20:04.211073  408688 start.go:924] validating driver "docker" against &{Name:functional-113333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-113333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:20:04.211180  408688 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:20:04.212949  408688 out.go:203] 
	W0929 11:20:04.214794  408688 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 11:20:04.215920  408688 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113333 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.39s)
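Both invocations are pure validation runs: --dry-run never touches the existing cluster. The first fails with exit status 23 (RSRC_INSUFFICIENT_REQ_MEMORY) because 250MiB is below the 1800MB usable minimum quoted in the error; the second, without a memory override, validates cleanly against the existing profile. Roughly:

    minikube start -p functional-113333 --dry-run --memory 250MB \
      --driver=docker --container-runtime=docker      # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
    minikube start -p functional-113333 --dry-run \
      --driver=docker --container-runtime=docker      # validates against the existing profile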

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-113333 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-113333 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (164.448856ms)

                                                
                                                
-- stdout --
	* [functional-113333] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:20:04.491921  409081 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:20:04.492007  409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.492014  409081 out.go:374] Setting ErrFile to fd 2...
	I0929 11:20:04.492018  409081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:20:04.492320  409081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:20:04.492755  409081 out.go:368] Setting JSON to false
	I0929 11:20:04.493767  409081 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3748,"bootTime":1759141056,"procs":254,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1040-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0929 11:20:04.493856  409081 start.go:140] virtualization: kvm guest
	I0929 11:20:04.495673  409081 out.go:179] * [functional-113333] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0929 11:20:04.496907  409081 notify.go:220] Checking for updates...
	I0929 11:20:04.496966  409081 out.go:179]   - MINIKUBE_LOCATION=21655
	I0929 11:20:04.498242  409081 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 11:20:04.499707  409081 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	I0929 11:20:04.501035  409081 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	I0929 11:20:04.505457  409081 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0929 11:20:04.506863  409081 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 11:20:04.509025  409081 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:20:04.509717  409081 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 11:20:04.536233  409081 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0929 11:20:04.536391  409081 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:20:04.596439  409081 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:38 OomKillDisable:false NGoroutines:57 SystemTime:2025-09-29 11:20:04.586118728 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:20:04.596617  409081 docker.go:318] overlay module found
	I0929 11:20:04.598520  409081 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 11:20:04.599774  409081 start.go:304] selected driver: docker
	I0929 11:20:04.599789  409081 start.go:924] validating driver "docker" against &{Name:functional-113333 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-113333 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 11:20:04.599895  409081 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 11:20:04.603063  409081 out.go:203] 
	W0929 11:20:04.604206  409081 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 11:20:04.605379  409081 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.95s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-113333 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-113333 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-pvq4m" [5c783e39-8879-4077-b31d-98eaa49231cb] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-pvq4m" [5c783e39-8879-4077-b31d-98eaa49231cb] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00343029s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:30170
functional_test.go:1680: http://192.168.49.2:30170: success! body:
Request served by hello-node-connect-7d85dfc575-pvq4m

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:30170
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.54s)
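End to end, this test deploys an echo server, exposes it as a NodePort Service, asks minikube for the URL, and performs an HTTP GET against it. A minimal manual version, assuming the kicbase/echo-server image, would be:

    kubectl --context functional-113333 create deployment hello-node-connect --image=kicbase/echo-server
    kubectl --context functional-113333 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(minikube -p functional-113333 service hello-node-connect --url)   # e.g. http://192.168.49.2:30170
    curl "$URL"      # the echo server replies with the request it received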

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.12s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh -n functional-113333 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cp functional-113333:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd4155561734/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh -n functional-113333 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh -n functional-113333 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.77s)
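The cp test copies a file onto the node, copies it back out, and verifies the contents over SSH each time. Assuming the same profile and node name:

    minikube -p functional-113333 cp testdata/cp-test.txt /home/docker/cp-test.txt
    minikube -p functional-113333 ssh -n functional-113333 "sudo cat /home/docker/cp-test.txt"
    minikube -p functional-113333 cp functional-113333:/home/docker/cp-test.txt /tmp/cp-test.txt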

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/360782/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo cat /etc/test/nested/copy/360782/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/360782.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo cat /etc/ssl/certs/360782.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/360782.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo cat /usr/share/ca-certificates/360782.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3607822.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo cat /etc/ssl/certs/3607822.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3607822.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo cat /usr/share/ca-certificates/3607822.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.70s)
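CertSync checks that a certificate supplied on the host (the 360782.pem name here appears to be derived from the test process ID) has been synced into the node both under /etc/ssl/certs and /usr/share/ca-certificates, including a hash-named copy. A spot check over SSH would look like:

    minikube -p functional-113333 ssh "sudo cat /etc/ssl/certs/360782.pem"
    minikube -p functional-113333 ssh "sudo cat /usr/share/ca-certificates/360782.pem"
    minikube -p functional-113333 ssh "sudo cat /etc/ssl/certs/51391683.0"   # hash-named copy of the same cert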

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-113333 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 ssh "sudo systemctl is-active crio": exit status 1 (272.812925ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.27s)
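With Docker as the active runtime, the CRI-O unit is expected to be inactive; `systemctl is-active` exits non-zero for an inactive unit (the "Process exited with status 3" in stderr), so the non-zero exit combined with the "inactive" stdout is the passing outcome here. For example:

    minikube -p functional-113333 ssh "sudo systemctl is-active crio"   # prints "inactive", exits non-zero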

                                                
                                    
TestFunctional/parallel/License (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (8.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-113333 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-113333 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-524nr" [9ff7b5c8-fd14-4386-a288-d594685070cd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-524nr" [9ff7b5c8-fd14-4386-a288-d594685070cd] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.00358705s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.20s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-113333 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-113333 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-113333 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-113333 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 406291: os: process already finished
helpers_test.go:525: unable to kill pid 405879: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.54s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-113333 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-113333 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [f55c0273-1eae-402f-9c5c-87bb8d901dc9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [f55c0273-1eae-402f-9c5c-87bb8d901dc9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003298726s
I0929 11:20:02.777995  360782 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 service list -o json
functional_test.go:1504: Took "480.282481ms" to run "out/minikube-linux-amd64 -p functional-113333 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-113333 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)
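The tunnel subtests run `minikube tunnel` in the background so that LoadBalancer Services receive an ingress IP reachable from the host; the jsonpath query above reads that IP back once nginx-svc is running. In outline, using the testsvc.yaml manifest from testdata:

    minikube -p functional-113333 tunnel &      # keep running; assigns LoadBalancer ingress IPs
    kubectl --context functional-113333 apply -f testdata/testsvc.yaml
    kubectl --context functional-113333 get svc nginx-svc \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}'      # e.g. 10.99.105.116 in this run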

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:31608
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.105.116 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-113333 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.34s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31608
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.37s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (7.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdany-port4048066760/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759144803900459351" to /tmp/TestFunctionalparallelMountCmdany-port4048066760/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759144803900459351" to /tmp/TestFunctionalparallelMountCmdany-port4048066760/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759144803900459351" to /tmp/TestFunctionalparallelMountCmdany-port4048066760/001/test-1759144803900459351
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.412572ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:20:04.204234  360782 retry.go:31] will retry after 661.75227ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 11:20 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 11:20 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 11:20 test-1759144803900459351
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh cat /mount-9p/test-1759144803900459351
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-113333 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [8bea96d3-5522-4807-a3b3-12e6260ba27c] Pending
helpers_test.go:352: "busybox-mount" [8bea96d3-5522-4807-a3b3-12e6260ba27c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [8bea96d3-5522-4807-a3b3-12e6260ba27c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [8bea96d3-5522-4807-a3b3-12e6260ba27c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.00374293s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-113333 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdany-port4048066760/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (7.97s)
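The mount test exposes a host temp directory inside the node over 9p at /mount-9p, then exercises it from SSH and from a short-lived busybox pod; the first findmnt probe simply races the mount becoming available, hence the single retry. Manually, with a hypothetical host directory:

    minikube mount -p functional-113333 /tmp/hostdir:/mount-9p &      # /tmp/hostdir is a placeholder path
    minikube -p functional-113333 ssh "findmnt -T /mount-9p | grep 9p"
    minikube -p functional-113333 ssh -- ls -la /mount-9p
    minikube -p functional-113333 ssh "sudo umount -f /mount-9p"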

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "345.350918ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "49.956642ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.40s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "347.565364ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "50.533562ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.40s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.47s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-113333 docker-env) && out/minikube-linux-amd64 status -p functional-113333"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-113333 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.02s)
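docker-env prints shell exports (DOCKER_HOST plus the related TLS variables) that point the local docker CLI at the daemon inside the minikube node; the test evaluates them in a bash subshell and confirms that both minikube status and docker images still work. For example:

    eval "$(minikube -p functional-113333 docker-env)"
    docker images      # now lists the images stored in the functional-113333 node's Docker daemon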

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.13s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.13s)
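Note: all three UpdateContextCmd subtests drive the same command; to check the effect manually one would refresh the context and then inspect the kubeconfig (the kubectl step is an assumption and was not part of this run):

    # rewrite the kubeconfig entry for the profile with the current node address
    out/minikube-linux-amd64 -p functional-113333 update-context
    # inspect the result (assumes kubectl is installed on the host)
    kubectl config current-context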

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113333 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-113333
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-113333
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113333 image ls --format short --alsologtostderr:
I0929 11:20:16.438483  414249 out.go:360] Setting OutFile to fd 1 ...
I0929 11:20:16.438741  414249 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:16.438750  414249 out.go:374] Setting ErrFile to fd 2...
I0929 11:20:16.438754  414249 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:16.438991  414249 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
I0929 11:20:16.439602  414249 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:16.439702  414249 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:16.440078  414249 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:16.458038  414249 ssh_runner.go:195] Run: systemctl --version
I0929 11:20:16.458085  414249 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:16.474846  414249 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:16.567956  414249 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113333 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ localhost/my-image                          │ functional-113333 │ 80030e612f46f │ 1.24MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 56cc512116c8f │ 4.4MB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ df0860106674d │ 71.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ 46169d968e920 │ 52.8MB │
│ docker.io/library/nginx                     │ alpine            │ 4a86014ec6994 │ 52.5MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ 90550c43ad2bc │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ a0af72f2ec6d6 │ 74.9MB │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ 5f1f5298c888d │ 195MB  │
│ docker.io/kicbase/echo-server               │ functional-113333 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ docker.io/library/minikube-local-cache-test │ functional-113333 │ 124a64494e7b1 │ 30B    │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113333 image ls --format table --alsologtostderr:
I0929 11:20:19.620250  414753 out.go:360] Setting OutFile to fd 1 ...
I0929 11:20:19.620905  414753 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:19.620923  414753 out.go:374] Setting ErrFile to fd 2...
I0929 11:20:19.620929  414753 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:19.621256  414753 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
I0929 11:20:19.621867  414753 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:19.621972  414753 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:19.622317  414753 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:19.641145  414753 ssh_runner.go:195] Run: systemctl --version
I0929 11:20:19.641194  414753 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:19.658630  414753 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:19.750840  414753 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0929 11:20:42.534983  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:22:04.456362  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:20.588247  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:24:48.298526  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113333 image ls --format json --alsologtostderr:
[{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"124a64494e7b1290c16cd2566d92a9b9a6be816c522b7e058ed720db3df1a10f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-113333"],"size":"30"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"52800000"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"71900000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"75000000"},{"id":"56cc512116c8f894f11ce199546
0aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"80030e612f46fb84e4aea4d8401320e346a9473bdc0d413bc7ad2d2551657cb2","repoDigests":[],"repoTags":["localhost/my-image:functional-113333"],"size":"1240000"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"88000000"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicb
ase/echo-server:functional-113333","docker.io/kicbase/echo-server:latest"],"size":"4940000"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195000000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"74900000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113333 image ls --format json --alsologtostderr:
I0929 11:20:19.415053  414704 out.go:360] Setting OutFile to fd 1 ...
I0929 11:20:19.415313  414704 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:19.415324  414704 out.go:374] Setting ErrFile to fd 2...
I0929 11:20:19.415328  414704 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:19.415569  414704 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
I0929 11:20:19.416252  414704 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:19.416364  414704 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:19.416769  414704 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:19.434838  414704 ssh_runner.go:195] Run: systemctl --version
I0929 11:20:19.434905  414704 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:19.453007  414704 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:19.545762  414704 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-113333 image ls --format yaml --alsologtostderr:
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "71900000"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "52800000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52500000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-113333
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 124a64494e7b1290c16cd2566d92a9b9a6be816c522b7e058ed720db3df1a10f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-113333
size: "30"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "88000000"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "74900000"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195000000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113333 image ls --format yaml --alsologtostderr:
I0929 11:20:16.642190  414299 out.go:360] Setting OutFile to fd 1 ...
I0929 11:20:16.642296  414299 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:16.642305  414299 out.go:374] Setting ErrFile to fd 2...
I0929 11:20:16.642309  414299 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:16.642604  414299 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
I0929 11:20:16.643221  414299 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:16.643304  414299 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:16.643668  414299 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:16.663449  414299 ssh_runner.go:195] Run: systemctl --version
I0929 11:20:16.663506  414299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:16.680509  414299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:16.772837  414299 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.21s)
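Note: the four ImageList subtests above differ only in the output format flag; a minimal sketch of the same listing in each supported format is:

    out/minikube-linux-amd64 -p functional-113333 image ls --format short
    out/minikube-linux-amd64 -p functional-113333 image ls --format table
    out/minikube-linux-amd64 -p functional-113333 image ls --format json
    out/minikube-linux-amd64 -p functional-113333 image ls --format yaml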

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 ssh pgrep buildkitd: exit status 1 (253.648029ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image build -t localhost/my-image:functional-113333 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-113333 image build -t localhost/my-image:functional-113333 testdata/build --alsologtostderr: (2.106136008s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-113333 image build -t localhost/my-image:functional-113333 testdata/build --alsologtostderr:
I0929 11:20:17.099416  414450 out.go:360] Setting OutFile to fd 1 ...
I0929 11:20:17.099522  414450 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:17.099534  414450 out.go:374] Setting ErrFile to fd 2...
I0929 11:20:17.099542  414450 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 11:20:17.099755  414450 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
I0929 11:20:17.100364  414450 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:17.100985  414450 config.go:182] Loaded profile config "functional-113333": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 11:20:17.101352  414450 cli_runner.go:164] Run: docker container inspect functional-113333 --format={{.State.Status}}
I0929 11:20:17.118720  414450 ssh_runner.go:195] Run: systemctl --version
I0929 11:20:17.118763  414450 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-113333
I0929 11:20:17.137278  414450 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33148 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/functional-113333/id_rsa Username:docker}
I0929 11:20:17.230114  414450 build_images.go:161] Building image from path: /tmp/build.2629793375.tar
I0929 11:20:17.230182  414450 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 11:20:17.240956  414450 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2629793375.tar
I0929 11:20:17.244634  414450 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2629793375.tar: stat -c "%s %y" /var/lib/minikube/build/build.2629793375.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2629793375.tar': No such file or directory
I0929 11:20:17.244671  414450 ssh_runner.go:362] scp /tmp/build.2629793375.tar --> /var/lib/minikube/build/build.2629793375.tar (3072 bytes)
I0929 11:20:17.270132  414450 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2629793375
I0929 11:20:17.281664  414450 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2629793375 -xf /var/lib/minikube/build/build.2629793375.tar
I0929 11:20:17.292516  414450 docker.go:361] Building image: /var/lib/minikube/build/build.2629793375
I0929 11:20:17.292603  414450 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-113333 /var/lib/minikube/build/build.2629793375
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.2s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa done
#5 DONE 0.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.0s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:80030e612f46fb84e4aea4d8401320e346a9473bdc0d413bc7ad2d2551657cb2 done
#8 naming to localhost/my-image:functional-113333 done
#8 DONE 0.0s
I0929 11:20:19.135118  414450 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-113333 /var/lib/minikube/build/build.2629793375: (1.842480236s)
I0929 11:20:19.135229  414450 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2629793375
I0929 11:20:19.145025  414450 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2629793375.tar
I0929 11:20:19.154345  414450 build_images.go:217] Built localhost/my-image:functional-113333 from /tmp/build.2629793375.tar
I0929 11:20:19.154386  414450 build_images.go:133] succeeded building to: functional-113333
I0929 11:20:19.154392  414450 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.57s)
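Note: the BuildKit trace above (steps #1, #5, #6, #7) implies a three-line Dockerfile; a minimal build context that reproduces the same image is sketched below. The ./build-context directory name is an assumption for illustration; the test itself uses testdata/build.

    # Dockerfile, as inferred from the build steps in the trace
    FROM gcr.io/k8s-minikube/busybox:latest
    RUN true
    ADD content.txt /

    # build it against the cluster's Docker daemon through minikube
    out/minikube-linux-amd64 -p functional-113333 image build -t localhost/my-image:functional-113333 ./build-context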

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-113333
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image load --daemon kicbase/echo-server:functional-113333 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image load --daemon kicbase/echo-server:functional-113333 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.82s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-113333
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image load --daemon kicbase/echo-server:functional-113333 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (0.89s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image save kicbase/echo-server:functional-113333 /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image rm kicbase/echo-server:functional-113333 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image load /home/jenkins/workspace/Docker_Linux_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-113333
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 image save --daemon kicbase/echo-server:functional-113333 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-113333
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.35s)
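Note: taken together, the ImageSaveToFile, ImageRemove, ImageLoadFromFile, and ImageSaveDaemon subtests form a save-and-restore round trip; a minimal sketch of that flow (the ./echo-server-save.tar path is an assumption, the run above used a workspace path) is:

    # export the image from the cluster, remove it, then load it back from the tarball
    out/minikube-linux-amd64 -p functional-113333 image save kicbase/echo-server:functional-113333 ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-113333 image rm kicbase/echo-server:functional-113333
    out/minikube-linux-amd64 -p functional-113333 image load ./echo-server-save.tar
    out/minikube-linux-amd64 -p functional-113333 image ls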

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdspecific-port3676981704/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.988754ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:20:12.157393  360782 retry.go:31] will retry after 711.457435ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdspecific-port3676981704/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 ssh "sudo umount -f /mount-9p": exit status 1 (254.121289ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-113333 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdspecific-port3676981704/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.98s)
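Note: the mount check above can be reproduced against the same profile; a minimal sketch (the /tmp/mount-src host directory is an assumption, the test uses a generated temporary directory) is:

    # start a 9p mount on a fixed port in the background
    out/minikube-linux-amd64 mount -p functional-113333 /tmp/mount-src:/mount-9p --port 46464 &
    # verify the mount from inside the node, then list its contents
    out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T /mount-9p | grep 9p"
    out/minikube-linux-amd64 -p functional-113333 ssh -- ls -la /mount-9p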

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T" /mount1: exit status 1 (308.745128ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 11:20:14.162022  360782 retry.go:31] will retry after 511.505132ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-113333 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-113333 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-113333 /tmp/TestFunctionalparallelMountCmdVerifyCleanup715299586/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.64s)
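Note: the cleanup step exercised here has a direct CLI equivalent; stray mount helper processes for a profile can be torn down with:

    out/minikube-linux-amd64 mount -p functional-113333 --kill=true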

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-113333
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-113333
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-113333
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (98.21s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m37.504981857s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (98.21s)
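Note: the multi-control-plane cluster used by the remaining serial tests is created by the single start invocation above; a minimal sketch of that command and the follow-up health check is:

    # create an HA (multi-control-plane) cluster on the docker driver and wait for it to be ready
    out/minikube-linux-amd64 -p ha-271001 start --ha --memory 3072 --wait true --driver=docker --container-runtime=docker
    # report host/kubelet/apiserver state for every node in the profile
    out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5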

                                                
                                    
TestMultiControlPlane/serial/DeployApp (45.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 kubectl -- rollout status deployment/busybox: (3.670627399s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
I0929 11:31:53.976256  360782 retry.go:31] will retry after 1.226733911s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
I0929 11:31:55.327378  360782 retry.go:31] will retry after 1.813573386s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
I0929 11:31:57.256396  360782 retry.go:31] will retry after 1.687169679s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
I0929 11:31:59.071222  360782 retry.go:31] will retry after 2.90133404s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
I0929 11:32:02.090340  360782 retry.go:31] will retry after 4.930388824s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
I0929 11:32:07.140276  360782 retry.go:31] will retry after 10.513075257s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
I0929 11:32:17.772208  360782 retry.go:31] will retry after 16.095273965s: expected 3 Pod IPs but got 2 (may be temporary), output: "\n-- stdout --\n\t'10.244.1.3 10.244.1.2'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-4kjfh -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-b5npr -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-zvlsc -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-4kjfh -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-b5npr -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-zvlsc -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-4kjfh -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-b5npr -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-zvlsc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (45.73s)
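Note: the retries above are the test polling until every busybox replica has been assigned a pod IP; the same check can be run by hand with the commands already shown in the log:

    # deploy the DNS test workload and wait for the rollout to finish
    out/minikube-linux-amd64 -p ha-271001 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
    out/minikube-linux-amd64 -p ha-271001 kubectl -- rollout status deployment/busybox
    # poll until three pod IPs are reported (the run above briefly saw only two)
    out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'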

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-4kjfh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-4kjfh -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-b5npr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-b5npr -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-zvlsc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 kubectl -- exec busybox-7b57f96db7-zvlsc -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.14s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (14.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 node add --alsologtostderr -v 5: (13.774295313s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (14.71s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-271001 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.94s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (16.86s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp testdata/cp-test.txt ha-271001:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2611004771/001/cp-test_ha-271001.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001:/home/docker/cp-test.txt ha-271001-m02:/home/docker/cp-test_ha-271001_ha-271001-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m02 "sudo cat /home/docker/cp-test_ha-271001_ha-271001-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001:/home/docker/cp-test.txt ha-271001-m03:/home/docker/cp-test_ha-271001_ha-271001-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m03 "sudo cat /home/docker/cp-test_ha-271001_ha-271001-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001:/home/docker/cp-test.txt ha-271001-m04:/home/docker/cp-test_ha-271001_ha-271001-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m04 "sudo cat /home/docker/cp-test_ha-271001_ha-271001-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp testdata/cp-test.txt ha-271001-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2611004771/001/cp-test_ha-271001-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m02:/home/docker/cp-test.txt ha-271001:/home/docker/cp-test_ha-271001-m02_ha-271001.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001 "sudo cat /home/docker/cp-test_ha-271001-m02_ha-271001.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m02:/home/docker/cp-test.txt ha-271001-m03:/home/docker/cp-test_ha-271001-m02_ha-271001-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m03 "sudo cat /home/docker/cp-test_ha-271001-m02_ha-271001-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m02:/home/docker/cp-test.txt ha-271001-m04:/home/docker/cp-test_ha-271001-m02_ha-271001-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m04 "sudo cat /home/docker/cp-test_ha-271001-m02_ha-271001-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp testdata/cp-test.txt ha-271001-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2611004771/001/cp-test_ha-271001-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m03:/home/docker/cp-test.txt ha-271001:/home/docker/cp-test_ha-271001-m03_ha-271001.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001 "sudo cat /home/docker/cp-test_ha-271001-m03_ha-271001.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m03:/home/docker/cp-test.txt ha-271001-m02:/home/docker/cp-test_ha-271001-m03_ha-271001-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m02 "sudo cat /home/docker/cp-test_ha-271001-m03_ha-271001-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m03:/home/docker/cp-test.txt ha-271001-m04:/home/docker/cp-test_ha-271001-m03_ha-271001-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m04 "sudo cat /home/docker/cp-test_ha-271001-m03_ha-271001-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp testdata/cp-test.txt ha-271001-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2611004771/001/cp-test_ha-271001-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m04:/home/docker/cp-test.txt ha-271001:/home/docker/cp-test_ha-271001-m04_ha-271001.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001 "sudo cat /home/docker/cp-test_ha-271001-m04_ha-271001.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m04:/home/docker/cp-test.txt ha-271001-m02:/home/docker/cp-test_ha-271001-m04_ha-271001-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m02 "sudo cat /home/docker/cp-test_ha-271001-m04_ha-271001-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 cp ha-271001-m04:/home/docker/cp-test.txt ha-271001-m03:/home/docker/cp-test_ha-271001-m04_ha-271001-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 ssh -n ha-271001-m03 "sudo cat /home/docker/cp-test_ha-271001-m04_ha-271001-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.86s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.5s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 node stop m02 --alsologtostderr -v 5: (10.820417099s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5: exit status 7 (681.289114ms)

                                                
                                                
-- stdout --
	ha-271001
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-271001-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-271001-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-271001-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:33:20.349008  448038 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:33:20.349264  448038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:33:20.349275  448038 out.go:374] Setting ErrFile to fd 2...
	I0929 11:33:20.349281  448038 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:33:20.349600  448038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:33:20.349801  448038 out.go:368] Setting JSON to false
	I0929 11:33:20.349838  448038 mustload.go:65] Loading cluster: ha-271001
	I0929 11:33:20.349919  448038 notify.go:220] Checking for updates...
	I0929 11:33:20.350306  448038 config.go:182] Loaded profile config "ha-271001": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:33:20.350330  448038 status.go:174] checking status of ha-271001 ...
	I0929 11:33:20.350839  448038 cli_runner.go:164] Run: docker container inspect ha-271001 --format={{.State.Status}}
	I0929 11:33:20.372636  448038 status.go:371] ha-271001 host status = "Running" (err=<nil>)
	I0929 11:33:20.372698  448038 host.go:66] Checking if "ha-271001" exists ...
	I0929 11:33:20.373125  448038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-271001
	I0929 11:33:20.392383  448038 host.go:66] Checking if "ha-271001" exists ...
	I0929 11:33:20.392634  448038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:33:20.392670  448038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-271001
	I0929 11:33:20.411129  448038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/ha-271001/id_rsa Username:docker}
	I0929 11:33:20.505326  448038 ssh_runner.go:195] Run: systemctl --version
	I0929 11:33:20.509767  448038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:33:20.521690  448038 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:33:20.577856  448038 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-29 11:33:20.566337598 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:33:20.578691  448038 kubeconfig.go:125] found "ha-271001" server: "https://192.168.49.254:8443"
	I0929 11:33:20.578734  448038 api_server.go:166] Checking apiserver status ...
	I0929 11:33:20.578778  448038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:33:20.591751  448038 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2291/cgroup
	W0929 11:33:20.601439  448038 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2291/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:33:20.601494  448038 ssh_runner.go:195] Run: ls
	I0929 11:33:20.605082  448038 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:33:20.609221  448038 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:33:20.609246  448038 status.go:463] ha-271001 apiserver status = Running (err=<nil>)
	I0929 11:33:20.609260  448038 status.go:176] ha-271001 status: &{Name:ha-271001 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:33:20.609282  448038 status.go:174] checking status of ha-271001-m02 ...
	I0929 11:33:20.609530  448038 cli_runner.go:164] Run: docker container inspect ha-271001-m02 --format={{.State.Status}}
	I0929 11:33:20.627187  448038 status.go:371] ha-271001-m02 host status = "Stopped" (err=<nil>)
	I0929 11:33:20.627213  448038 status.go:384] host is not running, skipping remaining checks
	I0929 11:33:20.627222  448038 status.go:176] ha-271001-m02 status: &{Name:ha-271001-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:33:20.627255  448038 status.go:174] checking status of ha-271001-m03 ...
	I0929 11:33:20.627498  448038 cli_runner.go:164] Run: docker container inspect ha-271001-m03 --format={{.State.Status}}
	I0929 11:33:20.646053  448038 status.go:371] ha-271001-m03 host status = "Running" (err=<nil>)
	I0929 11:33:20.646078  448038 host.go:66] Checking if "ha-271001-m03" exists ...
	I0929 11:33:20.646380  448038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-271001-m03
	I0929 11:33:20.663805  448038 host.go:66] Checking if "ha-271001-m03" exists ...
	I0929 11:33:20.664159  448038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:33:20.664232  448038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-271001-m03
	I0929 11:33:20.682291  448038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33163 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/ha-271001-m03/id_rsa Username:docker}
	I0929 11:33:20.775482  448038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:33:20.789669  448038 kubeconfig.go:125] found "ha-271001" server: "https://192.168.49.254:8443"
	I0929 11:33:20.789696  448038 api_server.go:166] Checking apiserver status ...
	I0929 11:33:20.789728  448038 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:33:20.801545  448038 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2186/cgroup
	W0929 11:33:20.811191  448038 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:33:20.811250  448038 ssh_runner.go:195] Run: ls
	I0929 11:33:20.814760  448038 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 11:33:20.818851  448038 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 11:33:20.818893  448038 status.go:463] ha-271001-m03 apiserver status = Running (err=<nil>)
	I0929 11:33:20.818905  448038 status.go:176] ha-271001-m03 status: &{Name:ha-271001-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:33:20.818923  448038 status.go:174] checking status of ha-271001-m04 ...
	I0929 11:33:20.819226  448038 cli_runner.go:164] Run: docker container inspect ha-271001-m04 --format={{.State.Status}}
	I0929 11:33:20.837843  448038 status.go:371] ha-271001-m04 host status = "Running" (err=<nil>)
	I0929 11:33:20.837869  448038 host.go:66] Checking if "ha-271001-m04" exists ...
	I0929 11:33:20.838167  448038 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-271001-m04
	I0929 11:33:20.856764  448038 host.go:66] Checking if "ha-271001-m04" exists ...
	I0929 11:33:20.857116  448038 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:33:20.857156  448038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-271001-m04
	I0929 11:33:20.874936  448038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/ha-271001-m04/id_rsa Username:docker}
	I0929 11:33:20.967957  448038 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:33:20.980051  448038 status.go:176] ha-271001-m04 status: &{Name:ha-271001-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.50s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (66.46s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 node start m02 --alsologtostderr -v 5
E0929 11:34:20.588482  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 node start m02 --alsologtostderr -v 5: (1m5.472543045s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (66.46s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.96s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (170.98s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 stop --alsologtostderr -v 5
E0929 11:34:53.819006  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:53.825426  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:53.836776  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:53.858170  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:53.899597  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:53.981087  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:54.142644  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:54.464332  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:55.106434  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:56.388036  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:34:58.950976  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 stop --alsologtostderr -v 5: (33.471598943s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 start --wait true --alsologtostderr -v 5
E0929 11:35:04.072390  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:14.314748  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:34.796130  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:35:43.662146  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:36:15.758017  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 start --wait true --alsologtostderr -v 5: (2m17.401821393s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (170.98s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (9.48s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 node delete m03 --alsologtostderr -v 5: (8.652517013s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (9.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (22.83s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 stop --alsologtostderr -v 5
E0929 11:37:37.682089  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 stop --alsologtostderr -v 5: (22.728020922s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5: exit status 7 (102.769561ms)

                                                
                                                
-- stdout --
	ha-271001
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-271001-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-271001-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:37:53.010579  479770 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:37:53.010853  479770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:37:53.010864  479770 out.go:374] Setting ErrFile to fd 2...
	I0929 11:37:53.010869  479770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:37:53.011109  479770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:37:53.011292  479770 out.go:368] Setting JSON to false
	I0929 11:37:53.011327  479770 mustload.go:65] Loading cluster: ha-271001
	I0929 11:37:53.011358  479770 notify.go:220] Checking for updates...
	I0929 11:37:53.011702  479770 config.go:182] Loaded profile config "ha-271001": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:37:53.011721  479770 status.go:174] checking status of ha-271001 ...
	I0929 11:37:53.012085  479770 cli_runner.go:164] Run: docker container inspect ha-271001 --format={{.State.Status}}
	I0929 11:37:53.030483  479770 status.go:371] ha-271001 host status = "Stopped" (err=<nil>)
	I0929 11:37:53.030524  479770 status.go:384] host is not running, skipping remaining checks
	I0929 11:37:53.030533  479770 status.go:176] ha-271001 status: &{Name:ha-271001 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:37:53.030575  479770 status.go:174] checking status of ha-271001-m02 ...
	I0929 11:37:53.030954  479770 cli_runner.go:164] Run: docker container inspect ha-271001-m02 --format={{.State.Status}}
	I0929 11:37:53.048708  479770 status.go:371] ha-271001-m02 host status = "Stopped" (err=<nil>)
	I0929 11:37:53.048735  479770 status.go:384] host is not running, skipping remaining checks
	I0929 11:37:53.048744  479770 status.go:176] ha-271001-m02 status: &{Name:ha-271001-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:37:53.048772  479770 status.go:174] checking status of ha-271001-m04 ...
	I0929 11:37:53.049084  479770 cli_runner.go:164] Run: docker container inspect ha-271001-m04 --format={{.State.Status}}
	I0929 11:37:53.065841  479770 status.go:371] ha-271001-m04 host status = "Stopped" (err=<nil>)
	I0929 11:37:53.065859  479770 status.go:384] host is not running, skipping remaining checks
	I0929 11:37:53.065865  479770 status.go:176] ha-271001-m04 status: &{Name:ha-271001-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (22.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (93.92s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0929 11:39:20.588220  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m33.133384177s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (93.92s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.67s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (32.86s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 node add --control-plane --alsologtostderr -v 5
E0929 11:39:53.820063  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-271001 node add --control-plane --alsologtostderr -v 5: (31.968037536s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-271001 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (32.86s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.90s)

                                                
                                    
TestImageBuild/serial/Setup (21.66s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -p image-647529 --driver=docker  --container-runtime=docker
E0929 11:40:21.523926  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -p image-647529 --driver=docker  --container-runtime=docker: (21.655476398s)
--- PASS: TestImageBuild/serial/Setup (21.66s)

                                                
                                    
TestImageBuild/serial/NormalBuild (0.97s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-647529
--- PASS: TestImageBuild/serial/NormalBuild (0.97s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.65s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-647529
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.65s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-647529
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.46s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.47s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-647529
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.47s)

                                                
                                    
TestJSONOutput/start/Command (63.79s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-311769 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-311769 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m3.785664212s)
--- PASS: TestJSONOutput/start/Command (63.79s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-311769 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.44s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-311769 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.44s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-311769 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-311769 --output=json --user=testUser: (5.727819303s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-709421 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-709421 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (65.603401ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7c0fd7e9-b404-4c8c-bd98-11e39e303694","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-709421] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"2dc1fbe3-3a8e-44bf-b5ab-ac310fd82615","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21655"}}
	{"specversion":"1.0","id":"2c5b17c2-71ba-4002-b366-3f731b0b9946","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"e161a834-1af2-4fed-90e8-2485f914b424","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig"}}
	{"specversion":"1.0","id":"a39f96ca-52d4-4f1f-9642-e712a14096f5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube"}}
	{"specversion":"1.0","id":"2ae87c51-7730-4baa-ac1f-2da3d9052e26","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"80f424cd-010d-4efc-a381-580ebd22e561","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70a72c19-2f50-4969-8a62-32840a02cd1b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-709421" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-709421
--- PASS: TestErrorJSONOutput (0.21s)

                                                
                                    
TestKicCustomNetwork/create_custom_network (23.46s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-907316 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-907316 --network=: (21.357599398s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-907316" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-907316
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-907316: (2.086364941s)
--- PASS: TestKicCustomNetwork/create_custom_network (23.46s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (23.07s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-701410 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-701410 --network=bridge: (21.096825869s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-701410" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-701410
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-701410: (1.954869711s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.07s)

                                                
                                    
TestKicExistingNetwork (24.68s)

=== RUN   TestKicExistingNetwork
I0929 11:42:34.310950  360782 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 11:42:34.327775  360782 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 11:42:34.327853  360782 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 11:42:34.327890  360782 cli_runner.go:164] Run: docker network inspect existing-network
W0929 11:42:34.344474  360782 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 11:42:34.344511  360782 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0929 11:42:34.344533  360782 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0929 11:42:34.344663  360782 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 11:42:34.361670  360782 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-194f2c805d9d IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:f2:95:7f:1a:a5:02} reservation:<nil>}
I0929 11:42:34.362109  360782 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00155f030}
I0929 11:42:34.362139  360782 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 11:42:34.362197  360782 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 11:42:34.417300  360782 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-325652 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-325652 --network=existing-network: (22.621644336s)
helpers_test.go:175: Cleaning up "existing-network-325652" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-325652
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-325652: (1.915324343s)
I0929 11:42:58.971261  360782 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (24.68s)

                                                
                                    
TestKicCustomSubnet (23.53s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-058684 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-058684 --subnet=192.168.60.0/24: (21.436690814s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-058684 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-058684" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-058684
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-058684: (2.075385033s)
--- PASS: TestKicCustomSubnet (23.53s)

                                                
                                    
TestKicStaticIP (23.48s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-250608 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-250608 --static-ip=192.168.200.200: (21.279397652s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-250608 ip
helpers_test.go:175: Cleaning up "static-ip-250608" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-250608
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-250608: (2.074315937s)
--- PASS: TestKicStaticIP (23.48s)

                                                
                                    
TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

                                                
                                    
TestMinikubeProfile (52.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-299270 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-299270 --driver=docker  --container-runtime=docker: (22.080180025s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-314165 --driver=docker  --container-runtime=docker
E0929 11:44:20.593994  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-314165 --driver=docker  --container-runtime=docker: (24.731457528s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-299270
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-314165
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-314165" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-314165
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-314165: (2.120186112s)
helpers_test.go:175: Cleaning up "first-299270" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-299270
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-299270: (2.143531728s)
--- PASS: TestMinikubeProfile (52.24s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (7.59s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-733362 --memory=3072 --mount-string /tmp/TestMountStartserial3749893859/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-733362 --memory=3072 --mount-string /tmp/TestMountStartserial3749893859/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.587006638s)
--- PASS: TestMountStart/serial/StartWithMountFirst (7.59s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.25s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-733362 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.25s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.51s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-747755 --memory=3072 --mount-string /tmp/TestMountStartserial3749893859/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-747755 --memory=3072 --mount-string /tmp/TestMountStartserial3749893859/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (6.506535908s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.51s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.25s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747755 ssh -- ls /minikube-host
E0929 11:44:53.819604  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMountStart/serial/VerifyMountSecond (0.25s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.5s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-733362 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-733362 --alsologtostderr -v=5: (1.499925492s)
--- PASS: TestMountStart/serial/DeleteFirst (1.50s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747755 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.25s)

                                                
                                    
TestMountStart/serial/Stop (1.18s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-747755
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-747755: (1.175622086s)
--- PASS: TestMountStart/serial/Stop (1.18s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.3s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-747755
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-747755: (7.295919431s)
--- PASS: TestMountStart/serial/RestartStopped (8.30s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.25s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-747755 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.25s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (44.76s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-164285 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-164285 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (44.286476545s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (44.76s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (39.74s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-164285 -- rollout status deployment/busybox: (2.915859306s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:45:54.970627  360782 retry.go:31] will retry after 1.191363099s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:45:56.275808  360782 retry.go:31] will retry after 1.961131005s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:45:58.349337  360782 retry.go:31] will retry after 3.316497381s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:46:01.783500  360782 retry.go:31] will retry after 3.229690236s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:46:05.127065  360782 retry.go:31] will retry after 5.294207584s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:46:10.539376  360782 retry.go:31] will retry after 6.227507313s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 11:46:16.882423  360782 retry.go:31] will retry after 13.351740632s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-b4bz6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-g6hbk -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-b4bz6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-g6hbk -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-b4bz6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-g6hbk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (39.74s)
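
The retry.go lines above show the test polling until both busybox pods report an IP, waiting a little longer after each attempt. A small sketch of that poll-with-growing-backoff shape; the backoff constants here are illustrative, minikube's retry helper uses its own schedule:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForPodIPs polls a check function with a growing delay until the
// expected number of pod IPs is reported or the timeout expires, the same
// retry-until-ready pattern visible in the log above.
func waitForPodIPs(check func() (int, error), want int, timeout time.Duration) error {
	delay := time.Second
	deadline := time.Now().Add(timeout)
	for {
		got, err := check()
		if err == nil && got >= want {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for pod IPs")
		}
		fmt.Printf("got %d of %d pod IPs, retrying in %s\n", got, want, delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
}

func main() {
	attempts := 0
	_ = waitForPodIPs(func() (int, error) {
		attempts++
		if attempts < 4 {
			return 1, nil // only one pod has an IP so far
		}
		return 2, nil
	}, 2, time.Minute)
}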

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.79s)
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-b4bz6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-b4bz6 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-g6hbk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-164285 -- exec busybox-7b57f96db7-g6hbk -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.79s)

                                                
                                    
TestMultiNode/serial/AddNode (14.48s)
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-164285 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-164285 -v=5 --alsologtostderr: (13.836326234s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (14.48s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.07s)
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-164285 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.69s)
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.69s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.69s)
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp testdata/cp-test.txt multinode-164285:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2313590246/001/cp-test_multinode-164285.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285:/home/docker/cp-test.txt multinode-164285-m02:/home/docker/cp-test_multinode-164285_multinode-164285-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m02 "sudo cat /home/docker/cp-test_multinode-164285_multinode-164285-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285:/home/docker/cp-test.txt multinode-164285-m03:/home/docker/cp-test_multinode-164285_multinode-164285-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m03 "sudo cat /home/docker/cp-test_multinode-164285_multinode-164285-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp testdata/cp-test.txt multinode-164285-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2313590246/001/cp-test_multinode-164285-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285-m02:/home/docker/cp-test.txt multinode-164285:/home/docker/cp-test_multinode-164285-m02_multinode-164285.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285 "sudo cat /home/docker/cp-test_multinode-164285-m02_multinode-164285.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285-m02:/home/docker/cp-test.txt multinode-164285-m03:/home/docker/cp-test_multinode-164285-m02_multinode-164285-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m03 "sudo cat /home/docker/cp-test_multinode-164285-m02_multinode-164285-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp testdata/cp-test.txt multinode-164285-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2313590246/001/cp-test_multinode-164285-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285-m03:/home/docker/cp-test.txt multinode-164285:/home/docker/cp-test_multinode-164285-m03_multinode-164285.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285 "sudo cat /home/docker/cp-test_multinode-164285-m03_multinode-164285.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 cp multinode-164285-m03:/home/docker/cp-test.txt multinode-164285-m02:/home/docker/cp-test_multinode-164285-m03_multinode-164285-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 ssh -n multinode-164285-m02 "sudo cat /home/docker/cp-test_multinode-164285-m03_multinode-164285-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.69s)
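
The CopyFile block above exercises every copy direction: a local test file onto each node, then each node's copy to every other node, with an `ssh -- sudo cat` read-back after each step. A compact sketch of the same copy matrix; the binary path, profile and node names are taken from the log, and the read-back step is omitted for brevity:

package main

import (
	"fmt"
	"os/exec"
)

// copyMatrix copies a local file onto each node and then fans it out from
// every node to every other node with `minikube cp`, mirroring the
// helpers_test.go steps above.
func copyMatrix(binary, profile string, nodes []string) error {
	for _, src := range nodes {
		// local -> node
		if out, err := exec.Command(binary, "-p", profile, "cp",
			"testdata/cp-test.txt", src+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
			return fmt.Errorf("cp to %s: %v\n%s", src, err, out)
		}
		// node -> every other node
		for _, dst := range nodes {
			if dst == src {
				continue
			}
			remote := fmt.Sprintf("%s:/home/docker/cp-test_%s_%s.txt", dst, src, dst)
			if out, err := exec.Command(binary, "-p", profile, "cp",
				src+":/home/docker/cp-test.txt", remote).CombinedOutput(); err != nil {
				return fmt.Errorf("cp %s -> %s: %v\n%s", src, dst, err, out)
			}
		}
	}
	return nil
}

func main() {
	_ = copyMatrix("out/minikube-linux-amd64", "multinode-164285",
		[]string{"multinode-164285", "multinode-164285-m02", "multinode-164285-m03"})
}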

                                                
                                    
TestMultiNode/serial/StopNode (2.17s)
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-164285 node stop m03: (1.22059921s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-164285 status: exit status 7 (473.172441ms)

                                                
                                                
-- stdout --
	multinode-164285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-164285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-164285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-164285 status --alsologtostderr: exit status 7 (477.034109ms)

                                                
                                                
-- stdout --
	multinode-164285
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-164285-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-164285-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:46:58.995943  562731 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:46:58.996069  562731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:46:58.996082  562731 out.go:374] Setting ErrFile to fd 2...
	I0929 11:46:58.996088  562731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:46:58.996323  562731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:46:58.996532  562731 out.go:368] Setting JSON to false
	I0929 11:46:58.996570  562731 mustload.go:65] Loading cluster: multinode-164285
	I0929 11:46:58.996677  562731 notify.go:220] Checking for updates...
	I0929 11:46:58.997091  562731 config.go:182] Loaded profile config "multinode-164285": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:46:58.997119  562731 status.go:174] checking status of multinode-164285 ...
	I0929 11:46:58.997637  562731 cli_runner.go:164] Run: docker container inspect multinode-164285 --format={{.State.Status}}
	I0929 11:46:59.016803  562731 status.go:371] multinode-164285 host status = "Running" (err=<nil>)
	I0929 11:46:59.016831  562731 host.go:66] Checking if "multinode-164285" exists ...
	I0929 11:46:59.017108  562731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-164285
	I0929 11:46:59.034988  562731 host.go:66] Checking if "multinode-164285" exists ...
	I0929 11:46:59.035255  562731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:46:59.035312  562731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-164285
	I0929 11:46:59.053259  562731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/multinode-164285/id_rsa Username:docker}
	I0929 11:46:59.147151  562731 ssh_runner.go:195] Run: systemctl --version
	I0929 11:46:59.151433  562731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:46:59.162787  562731 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 11:46:59.217954  562731 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-29 11:46:59.208471428 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1040-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652178944 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:b98a3aace656320842a23f4a392a33f46af97866 Expected:} RuncCommit:{ID:v1.3.0-0-g4ca628d1 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v0.1.40] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0929 11:46:59.218509  562731 kubeconfig.go:125] found "multinode-164285" server: "https://192.168.67.2:8443"
	I0929 11:46:59.218545  562731 api_server.go:166] Checking apiserver status ...
	I0929 11:46:59.218589  562731 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 11:46:59.230525  562731 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2215/cgroup
	W0929 11:46:59.240100  562731 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2215/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0929 11:46:59.240153  562731 ssh_runner.go:195] Run: ls
	I0929 11:46:59.243487  562731 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 11:46:59.247505  562731 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 11:46:59.247527  562731 status.go:463] multinode-164285 apiserver status = Running (err=<nil>)
	I0929 11:46:59.247537  562731 status.go:176] multinode-164285 status: &{Name:multinode-164285 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:46:59.247567  562731 status.go:174] checking status of multinode-164285-m02 ...
	I0929 11:46:59.247792  562731 cli_runner.go:164] Run: docker container inspect multinode-164285-m02 --format={{.State.Status}}
	I0929 11:46:59.265189  562731 status.go:371] multinode-164285-m02 host status = "Running" (err=<nil>)
	I0929 11:46:59.265212  562731 host.go:66] Checking if "multinode-164285-m02" exists ...
	I0929 11:46:59.265443  562731 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-164285-m02
	I0929 11:46:59.283319  562731 host.go:66] Checking if "multinode-164285-m02" exists ...
	I0929 11:46:59.283647  562731 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 11:46:59.283701  562731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-164285-m02
	I0929 11:46:59.300503  562731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/21655-357219/.minikube/machines/multinode-164285-m02/id_rsa Username:docker}
	I0929 11:46:59.393161  562731 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 11:46:59.404726  562731 status.go:176] multinode-164285-m02 status: &{Name:multinode-164285-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:46:59.404769  562731 status.go:174] checking status of multinode-164285-m03 ...
	I0929 11:46:59.405078  562731 cli_runner.go:164] Run: docker container inspect multinode-164285-m03 --format={{.State.Status}}
	I0929 11:46:59.424468  562731 status.go:371] multinode-164285-m03 host status = "Stopped" (err=<nil>)
	I0929 11:46:59.424491  562731 status.go:384] host is not running, skipping remaining checks
	I0929 11:46:59.424497  562731 status.go:176] multinode-164285-m03 status: &{Name:multinode-164285-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.17s)
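
As the run above shows, `minikube status` exits with code 7 once any node in the profile is stopped, so the non-zero exits here are expected data rather than failures. A short sketch of reading that exit code in Go; the binary path and profile are taken from the log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// clusterStatus runs `minikube -p <profile> status` and returns the exit
// code plus the combined output. A non-zero exit (7 in the log above, once
// m03 is stopped) is reported as data, not as an error.
func clusterStatus(binary, profile string) (int, string, error) {
	out, err := exec.Command(binary, "-p", profile, "status").CombinedOutput()
	if err != nil {
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			return exitErr.ExitCode(), string(out), nil // e.g. 7 when a node is Stopped
		}
		return 0, "", err // binary missing or not started: a real error
	}
	return 0, string(out), nil
}

func main() {
	code, out, err := clusterStatus("out/minikube-linux-amd64", "multinode-164285")
	fmt.Println(code, err)
	fmt.Print(out)
}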

                                                
                                    
TestMultiNode/serial/StartAfterStop (8.7s)
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-164285 node start m03 -v=5 --alsologtostderr: (8.024499974s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (8.70s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (70.54s)
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-164285
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-164285
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-164285: (22.545462888s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-164285 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-164285 --wait=true -v=5 --alsologtostderr: (47.889243504s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-164285
--- PASS: TestMultiNode/serial/RestartKeepsNodes (70.54s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.17s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-164285 node delete m03: (4.589389177s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.17s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (21.64s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-164285 stop: (21.460235751s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-164285 status: exit status 7 (89.748104ms)

                                                
                                                
-- stdout --
	multinode-164285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-164285-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-164285 status --alsologtostderr: exit status 7 (87.710547ms)

                                                
                                                
-- stdout --
	multinode-164285
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-164285-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 11:48:45.435593  577057 out.go:360] Setting OutFile to fd 1 ...
	I0929 11:48:45.435745  577057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:48:45.435755  577057 out.go:374] Setting ErrFile to fd 2...
	I0929 11:48:45.435759  577057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 11:48:45.435975  577057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21655-357219/.minikube/bin
	I0929 11:48:45.436194  577057 out.go:368] Setting JSON to false
	I0929 11:48:45.436232  577057 mustload.go:65] Loading cluster: multinode-164285
	I0929 11:48:45.436358  577057 notify.go:220] Checking for updates...
	I0929 11:48:45.436718  577057 config.go:182] Loaded profile config "multinode-164285": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 11:48:45.436745  577057 status.go:174] checking status of multinode-164285 ...
	I0929 11:48:45.437367  577057 cli_runner.go:164] Run: docker container inspect multinode-164285 --format={{.State.Status}}
	I0929 11:48:45.456744  577057 status.go:371] multinode-164285 host status = "Stopped" (err=<nil>)
	I0929 11:48:45.456774  577057 status.go:384] host is not running, skipping remaining checks
	I0929 11:48:45.456785  577057 status.go:176] multinode-164285 status: &{Name:multinode-164285 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 11:48:45.456841  577057 status.go:174] checking status of multinode-164285-m02 ...
	I0929 11:48:45.457222  577057 cli_runner.go:164] Run: docker container inspect multinode-164285-m02 --format={{.State.Status}}
	I0929 11:48:45.475887  577057 status.go:371] multinode-164285-m02 host status = "Stopped" (err=<nil>)
	I0929 11:48:45.475917  577057 status.go:384] host is not running, skipping remaining checks
	I0929 11:48:45.475926  577057 status.go:176] multinode-164285-m02 status: &{Name:multinode-164285-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.64s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (48.05s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-164285 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0929 11:49:20.588523  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-164285 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (47.462563891s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-164285 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.05s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (24.76s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-164285
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-164285-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-164285-m02 --driver=docker  --container-runtime=docker: exit status 14 (66.384052ms)

                                                
                                                
-- stdout --
	* [multinode-164285-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-164285-m02' is duplicated with machine name 'multinode-164285-m02' in profile 'multinode-164285'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-164285-m03 --driver=docker  --container-runtime=docker
E0929 11:49:53.820032  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-164285-m03 --driver=docker  --container-runtime=docker: (22.256042716s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-164285
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-164285: exit status 80 (277.937687ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-164285 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-164285-m03 already exists in multinode-164285-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-164285-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-164285-m03: (2.115231529s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (24.76s)
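
The ValidateNameConflict steps above exercise two guards: `start -p multinode-164285-m02` is rejected with MK_USAGE (exit 14) because the name collides with a machine already belonging to the multinode-164285 profile, while `node add` fails with GUEST_NODE_ADD (exit 80) once multinode-164285-m03 exists as a standalone profile. A toy re-statement of the first rule, purely illustrative and not minikube's actual implementation:

package main

import "fmt"

// validateProfileName is an illustrative re-statement of the rule behind the
// MK_USAGE failure above: a new profile name may not equal the machine name
// of a node that already belongs to another profile.
func validateProfileName(name string, existingMachines []string) error {
	for _, m := range existingMachines {
		if name == m {
			return fmt.Errorf("profile name %q is duplicated with machine name %q", name, m)
		}
	}
	return nil
}

func main() {
	// m03 was deleted earlier in DeleteNode, so only these machines remain.
	machines := []string{"multinode-164285", "multinode-164285-m02"}
	fmt.Println(validateProfileName("multinode-164285-m02", machines)) // rejected (exit 14 in the log)
	fmt.Println(validateProfileName("multinode-164285-m03", machines)) // allowed; start succeeds in the log
}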

                                                
                                    
TestPreload (129.4s)
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-108539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-108539 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m8.877430095s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-108539 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-108539 image pull gcr.io/k8s-minikube/busybox: (1.574839909s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-108539
E0929 11:51:16.888259  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-108539: (5.650515284s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-108539 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-108539 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (50.883397764s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-108539 image list
helpers_test.go:175: Cleaning up "test-preload-108539" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-108539
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-108539: (2.201819796s)
--- PASS: TestPreload (129.40s)

                                                
                                    
TestScheduledStopUnix (95.97s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-982557 --memory=3072 --driver=docker  --container-runtime=docker
E0929 11:52:23.665786  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-982557 --memory=3072 --driver=docker  --container-runtime=docker: (22.907377185s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-982557 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-982557 -n scheduled-stop-982557
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-982557 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 11:52:34.949594  360782 retry.go:31] will retry after 83.294µs: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.950753  360782 retry.go:31] will retry after 187.58µs: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.951890  360782 retry.go:31] will retry after 161.452µs: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.953020  360782 retry.go:31] will retry after 305.871µs: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.954142  360782 retry.go:31] will retry after 638.702µs: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.955255  360782 retry.go:31] will retry after 702.476µs: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.956375  360782 retry.go:31] will retry after 1.450791ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.958591  360782 retry.go:31] will retry after 1.890901ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.960731  360782 retry.go:31] will retry after 3.465776ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.964937  360782 retry.go:31] will retry after 3.939075ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.969134  360782 retry.go:31] will retry after 4.808789ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.974378  360782 retry.go:31] will retry after 12.92145ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.987633  360782 retry.go:31] will retry after 8.598558ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:34.996899  360782 retry.go:31] will retry after 15.961703ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:35.013151  360782 retry.go:31] will retry after 19.749429ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
I0929 11:52:35.033420  360782 retry.go:31] will retry after 40.682782ms: open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-982557 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-982557 -n scheduled-stop-982557
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-982557
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-982557 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-982557
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-982557: exit status 7 (68.326328ms)

                                                
                                                
-- stdout --
	scheduled-stop-982557
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-982557 -n scheduled-stop-982557
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-982557 -n scheduled-stop-982557: exit status 7 (68.937998ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-982557" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-982557
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-982557: (1.655550986s)
--- PASS: TestScheduledStopUnix (95.97s)
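
The "will retry after Nµs" lines above come from the test polling for the scheduled-stop pid file that `minikube stop --schedule` writes under the profile directory. A minimal sketch of that wait loop; the pid file path is copied from the log and the backoff constants are illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForPidFile polls until the scheduled-stop pid file appears or the
// timeout expires, doubling the delay between attempts, the same retry loop
// visible in the log above.
func waitForPidFile(path string, timeout time.Duration) error {
	delay := 100 * time.Microsecond
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		} else if !os.IsNotExist(err) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("pid file %s never appeared", path)
		}
		time.Sleep(delay)
		delay *= 2 // grow the wait between attempts
	}
}

func main() {
	err := waitForPidFile(
		"/home/jenkins/minikube-integration/21655-357219/.minikube/profiles/scheduled-stop-982557/pid",
		5*time.Second)
	fmt.Println(err)
}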

                                                
                                    
TestSkaffold (74.77s)
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1220507843 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-382871 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-382871 --memory=3072 --driver=docker  --container-runtime=docker: (23.210844623s)
skaffold_test.go:86: copying out/minikube-linux-amd64 to /home/jenkins/workspace/Docker_Linux_integration/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1220507843 run --minikube-profile skaffold-382871 --kube-context skaffold-382871 --status-check=true --port-forward=false --interactive=false
E0929 11:54:20.588157  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1220507843 run --minikube-profile skaffold-382871 --kube-context skaffold-382871 --status-check=true --port-forward=false --interactive=false: (36.6175555s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-69bc849bbb-55ztb" [9086f599-5a2a-42d2-9ae5-f1a120002b01] Running
E0929 11:54:53.819480  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003714563s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-9864dc56b-4w7v5" [90e8416d-2fc9-4ade-b722-ca8ee6039eef] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003566794s
helpers_test.go:175: Cleaning up "skaffold-382871" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-382871
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-382871: (3.064473749s)
--- PASS: TestSkaffold (74.77s)

                                                
                                    
TestInsufficientStorage (9.75s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-760685 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-760685 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (7.545872494s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"12ca5a2b-c7ae-439a-8067-108fd4c5b3b1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-760685] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"7fbe2312-5ed8-4ae6-8288-57a8410c52a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21655"}}
	{"specversion":"1.0","id":"a66f4d29-83c1-47bc-b846-56cf47526438","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5631181c-9a34-4b7a-a952-2ad2ec44d8f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig"}}
	{"specversion":"1.0","id":"907765fa-b0db-4f78-ba5b-38405c0c5671","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube"}}
	{"specversion":"1.0","id":"a3093448-bd4c-4efb-b71b-72b518401d68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"dff9c019-1c46-485d-a92c-4937b1807a18","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"3a3c553f-3a75-428e-960d-fa86246b02f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"47952559-9fbe-46da-9685-970463f46b88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"ad59b61c-5bad-4e91-a764-f0143fe3bb22","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a758e46b-6dfe-43ee-bf18-84238d28c2a5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c6677382-6d7e-45be-9ea0-93e8804f2864","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-760685\" primary control-plane node in \"insufficient-storage-760685\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a037f7e-c3cf-4e73-947b-facdaea7e60b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"56bdad11-5f77-4a9b-be5b-c2d28d21fd65","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"7dc08aff-b9ac-4446-a705-111885e41bbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-760685 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-760685 --output=json --layout=cluster: exit status 7 (267.788842ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-760685","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-760685","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 11:55:10.178313  615166 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-760685" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-760685 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-760685 --output=json --layout=cluster: exit status 7 (276.407627ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-760685","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-760685","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 11:55:10.454779  615267 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-760685" does not appear in /home/jenkins/minikube-integration/21655-357219/kubeconfig
	E0929 11:55:10.466405  615267 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/insufficient-storage-760685/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-760685" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-760685
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-760685: (1.662867612s)
--- PASS: TestInsufficientStorage (9.75s)

                                                
                                    
TestRunningBinaryUpgrade (46.77s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1322779182 start -p running-upgrade-615723 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1322779182 start -p running-upgrade-615723 --memory=3072 --vm-driver=docker  --container-runtime=docker: (21.196101398s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-615723 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-615723 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (22.945288132s)
helpers_test.go:175: Cleaning up "running-upgrade-615723" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-615723
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-615723: (2.156297931s)
--- PASS: TestRunningBinaryUpgrade (46.77s)

                                                
                                    
TestKubernetesUpgrade (347.48s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695405 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695405 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (28.007257595s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-695405
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-695405: (10.736027665s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-695405 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-695405 status --format={{.Host}}: exit status 7 (72.359967ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695405 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695405 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m30.061665666s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-695405 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695405 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-695405 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (69.262828ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-695405] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-695405
	    minikube start -p kubernetes-upgrade-695405 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6954052 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-695405 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-695405 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-695405 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (35.969042379s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-695405" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-695405
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-695405: (2.496191912s)
--- PASS: TestKubernetesUpgrade (347.48s)

                                                
                                    
TestMissingContainerUpgrade (95.91s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.426727756 start -p missing-upgrade-670375 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.426727756 start -p missing-upgrade-670375 --memory=3072 --driver=docker  --container-runtime=docker: (40.377247524s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-670375
I0929 11:57:05.880739  360782 install.go:138] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate3258845960/001:/home/jenkins/workspace/Docker_Linux_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0929 11:57:05.903412  360782 install.go:163] /tmp/TestKVMDriverInstallOrUpdate3258845960/001/docker-machine-driver-kvm2 version is 1.37.0
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-670375: (10.66434891s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-670375
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-670375 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-670375 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (42.079382793s)
helpers_test.go:175: Cleaning up "missing-upgrade-670375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-670375
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-670375: (2.185952076s)
--- PASS: TestMissingContainerUpgrade (95.91s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.40s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-997583 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-997583 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (75.704916ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-997583] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21655
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21655-357219/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21655-357219/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (43.68s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-997583 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-997583 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (43.263823752s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-997583 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (43.68s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (68.16s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2244596898 start -p stopped-upgrade-019011 --memory=3072 --vm-driver=docker  --container-runtime=docker
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2244596898 start -p stopped-upgrade-019011 --memory=3072 --vm-driver=docker  --container-runtime=docker: (40.215776494s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2244596898 -p stopped-upgrade-019011 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2244596898 -p stopped-upgrade-019011 stop: (10.760541555s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-019011 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-019011 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (17.180872276s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (68.16s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (17.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-997583 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-997583 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (15.021091363s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-997583 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-997583 status -o json: exit status 2 (352.463129ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-997583","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-997583
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-997583: (1.937120266s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (17.31s)

                                                
                                    
TestNoKubernetes/serial/Start (7.11s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-997583 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-997583 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (7.106492299s)
--- PASS: TestNoKubernetes/serial/Start (7.11s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-997583 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-997583 "sudo systemctl is-active --quiet service kubelet": exit status 1 (315.334941ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.32s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (4.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:181: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (3.693378973s)
--- PASS: TestNoKubernetes/serial/ProfileList (4.47s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-019011
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-997583
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-997583: (1.22145761s)
--- PASS: TestNoKubernetes/serial/Stop (1.22s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (8.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-997583 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-997583 --driver=docker  --container-runtime=docker: (8.477453271s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-997583 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-997583 "sudo systemctl is-active --quiet service kubelet": exit status 1 (298.344297ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
TestPause/serial/Start (60.78s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-191967 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-191967 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m0.779900887s)
--- PASS: TestPause/serial/Start (60.78s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (51.43s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-191967 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-191967 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.412040226s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (51.43s)

                                                
                                    
TestPause/serial/Pause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-191967 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.53s)

                                                
                                    
TestPause/serial/VerifyStatus (0.31s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-191967 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-191967 --output=json --layout=cluster: exit status 2 (310.152171ms)

                                                
                                                
-- stdout --
	{"Name":"pause-191967","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-191967","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.31s)

                                                
                                    
TestPause/serial/Unpause (0.47s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-191967 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.47s)

                                                
                                    
TestPause/serial/PauseAgain (0.54s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-191967 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.54s)

                                                
                                    
TestPause/serial/DeletePaused (2.23s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-191967 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-191967 --alsologtostderr -v=5: (2.226718803s)
--- PASS: TestPause/serial/DeletePaused (2.23s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-191967
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-191967: exit status 1 (17.97242ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-191967: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.74s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (42.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (42.790345891s)
--- PASS: TestNetworkPlugins/group/auto/Start (42.79s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (57.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
E0929 11:59:20.588571  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (57.434048174s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (57.43s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-934155 "pgrep -a kubelet"
I0929 11:59:46.169737  360782 config.go:182] Loaded profile config "auto-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-934155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-pcjzl" [be32e65d-9e1c-4b52-9fe7-54a18a060537] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 11:59:48.289016  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:48.295553  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:48.306972  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:48.328353  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:48.369785  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:48.451717  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:48.613384  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:48.935097  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-pcjzl" [be32e65d-9e1c-4b52-9fe7-54a18a060537] Running
E0929 11:59:50.859004  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:53.421111  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 11:59:53.818988  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004029382s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-934155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ppfgz" [881d5ec9-b503-45b2-934a-33331e0aa126] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004114868s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-934155 "pgrep -a kubelet"
I0929 12:00:22.076001  360782 config.go:182] Loaded profile config "kindnet-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-934155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-s4gck" [345d8d44-3760-4f05-94db-4b3a4e976f31] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-s4gck" [345d8d44-3760-4f05-94db-4b3a4e976f31] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004024934s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.29s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (48.66s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
E0929 12:00:29.266835  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (48.65878166s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (48.66s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-934155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/Start (69.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p false-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0929 12:01:10.228765  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p false-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m9.260003805s)
--- PASS: TestNetworkPlugins/group/false/Start (69.26s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-934155 "pgrep -a kubelet"
I0929 12:01:17.113502  360782 config.go:182] Loaded profile config "custom-flannel-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-934155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k2df6" [8b1f6f18-4ddf-47f1-9594-24daee1a26c6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-k2df6" [8b1f6f18-4ddf-47f1-9594-24daee1a26c6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.005794687s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-934155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (61.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m1.364806795s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (61.36s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-934155 "pgrep -a kubelet"
I0929 12:02:02.824053  360782 config.go:182] Loaded profile config "false-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-934155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-wmrbx" [4a7b427b-a957-4edb-b8f8-a4d7c734edf7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-wmrbx" [4a7b427b-a957-4edb-b8f8-a4d7c734edf7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.003320837s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-934155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (115.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (1m55.233122633s)
--- PASS: TestNetworkPlugins/group/flannel/Start (115.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (64.77s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m4.772871928s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.77s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-934155 "pgrep -a kubelet"
I0929 12:02:50.981039  360782 config.go:182] Loaded profile config "enable-default-cni-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-934155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5hb9n" [74630749-c55c-4759-8f21-561d52e60de7] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5hb9n" [74630749-c55c-4759-8f21-561d52e60de7] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.006439479s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-934155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (63.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-934155 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m3.72204265s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (63.72s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-934155 "pgrep -a kubelet"
I0929 12:03:38.588317  360782 config.go:182] Loaded profile config "bridge-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-934155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7pvvb" [51a83950-e9b9-4aeb-bf6b-129977161220] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7pvvb" [51a83950-e9b9-4aeb-bf6b-129977161220] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003767075s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-934155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/FirstStart (38.83s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (38.825486481s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (38.83s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-b7bzc" [d83fb05e-49d8-4895-9c63-6348b9811e65] Running
E0929 12:04:20.588295  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003446154s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-934155 "pgrep -a kubelet"
I0929 12:04:25.075360  360782 config.go:182] Loaded profile config "kubenet-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kubenet/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-934155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-66dxm" [2392aa8f-baee-49bf-8e59-b0cbabd85c7f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-66dxm" [2392aa8f-baee-49bf-8e59-b0cbabd85c7f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.004059748s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.23s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-934155 "pgrep -a kubelet"
I0929 12:04:26.618318  360782 config.go:182] Loaded profile config "flannel-934155": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-934155 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gr5j7" [4e16ea32-1685-4b5e-bbe1-ccb31dd63400] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gr5j7" [4e16ea32-1685-4b5e-bbe1-ccb31dd63400] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.004160245s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.31s)

TestNetworkPlugins/group/kubenet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-934155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

TestNetworkPlugins/group/kubenet/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

TestNetworkPlugins/group/kubenet/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

TestNetworkPlugins/group/flannel/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-934155 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-934155 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.37s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-858855 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [60c5541f-4f85-419f-bc14-20bedfc90692] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0929 12:04:47.703691  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [60c5541f-4f85-419f-bc14-20bedfc90692] Running
E0929 12:04:51.548095  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:04:53.818895  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.004950642s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-858855 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.37s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-858855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0929 12:04:56.670043  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-858855 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.103456666s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-858855 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.25s)

TestStartStop/group/embed-certs/serial/FirstStart (66.56s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m6.562823614s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (66.56s)

TestStartStop/group/old-k8s-version/serial/Stop (12.31s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-858855 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-858855 --alsologtostderr -v=3: (12.309398639s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.31s)

TestStartStop/group/no-preload/serial/FirstStart (74.88s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m14.880274061s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.88s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.51s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (39.510388235s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.51s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-858855 -n old-k8s-version-858855
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-858855 -n old-k8s-version-858855: exit status 7 (78.305316ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-858855 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/old-k8s-version/serial/SecondStart (50.77s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0929 12:05:15.769092  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:15.775487  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:15.786936  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:15.808363  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:15.850228  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:15.931794  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:15.992722  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:16.093677  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:16.414968  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:17.056677  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:18.338059  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:20.900290  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:26.022647  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:27.392993  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:05:36.264937  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-858855 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (50.453470867s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-858855 -n old-k8s-version-858855
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (50.77s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [600e1fc3-6e66-4d30-8b65-a4c7b07b7fa0] Pending
helpers_test.go:352: "busybox" [600e1fc3-6e66-4d30-8b65-a4c7b07b7fa0] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [600e1fc3-6e66-4d30-8b65-a4c7b07b7fa0] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003468913s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.27s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-414542 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-414542 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.78s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (10.8s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-414542 --alsologtostderr -v=3
E0929 12:05:56.746682  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-414542 --alsologtostderr -v=3: (10.798194037s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (10.80s)

TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-031687 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c01f5e78-1992-4d9c-9731-a6d45b4c8fdb] Pending
helpers_test.go:352: "busybox" [c01f5e78-1992-4d9c-9731-a6d45b4c8fdb] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c01f5e78-1992-4d9c-9731-a6d45b4c8fdb] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.004368523s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-031687 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.25s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542: exit status 7 (66.989693ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-414542 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.72s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 12:06:08.354820  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-414542 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (51.399123848s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
E0929 12:06:58.305661  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (51.72s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-031687 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-031687 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.85s)

TestStartStop/group/embed-certs/serial/Stop (10.83s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-031687 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-031687 --alsologtostderr -v=3: (10.833763845s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (10.83s)

TestStartStop/group/no-preload/serial/DeployApp (8.33s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-306088 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [dc41235b-744a-4b01-9e79-673aa10047fe] Pending
E0929 12:06:17.326822  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:17.333137  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:17.344610  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:17.366647  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:17.408101  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [dc41235b-744a-4b01-9e79-673aa10047fe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0929 12:06:17.490210  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:17.652486  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:17.974765  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:18.616718  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [dc41235b-744a-4b01-9e79-673aa10047fe] Running
E0929 12:06:19.898219  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:22.460021  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.004765102s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-306088 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.33s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-031687 -n embed-certs-031687
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-031687 -n embed-certs-031687: exit status 7 (72.020871ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-031687 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/embed-certs/serial/SecondStart (48.42s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-031687 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (48.097542552s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-031687 -n embed-certs-031687
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-306088 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-306088 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.83s)

TestStartStop/group/no-preload/serial/Stop (10.79s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-306088 --alsologtostderr -v=3
E0929 12:06:27.581781  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-306088 --alsologtostderr -v=3: (10.789657339s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.79s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-306088 -n no-preload-306088
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-306088 -n no-preload-306088: exit status 7 (83.93657ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-306088 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (46.53s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 12:06:37.708664  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kindnet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:06:37.824118  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/custom-flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-306088 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (46.213701298s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-306088 -n no-preload-306088
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (46.53s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-858855 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

TestStartStop/group/old-k8s-version/serial/Pause (2.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-858855 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858855 -n old-k8s-version-858855
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858855 -n old-k8s-version-858855: exit status 2 (307.245016ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-858855 -n old-k8s-version-858855
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-858855 -n old-k8s-version-858855: exit status 2 (305.522118ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-858855 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-858855 -n old-k8s-version-858855
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-858855 -n old-k8s-version-858855
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.30s)

TestStartStop/group/newest-cni/serial/FirstStart (28.23s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 12:24:14.228831  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/enable-default-cni-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:20.245673  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/flannel-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:20.588153  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/addons-323939/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:25.294479  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubenet-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:36.891421  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (28.230897763s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.23s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-979136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.74s)

TestStartStop/group/newest-cni/serial/Stop (10.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-979136 --alsologtostderr -v=3
E0929 12:24:46.414779  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/auto-934155/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:47.357422  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:47.363855  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:47.375305  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:47.396768  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:47.438067  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:47.519506  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:47.681263  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:48.003293  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:48.289009  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/skaffold-382871/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:48.644722  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:49.926763  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-979136 --alsologtostderr -v=3: (10.792624148s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.79s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-979136 -n newest-cni-979136
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-979136 -n newest-cni-979136: exit status 7 (69.352576ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-979136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

TestStartStop/group/newest-cni/serial/SecondStart (12.95s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 12:24:52.488987  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:53.818931  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/functional-113333/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 12:24:57.610297  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-979136 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (12.627944227s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-979136 -n newest-cni-979136
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.95s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-414542 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-979136 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-414542 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542: exit status 2 (331.514538ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542: exit status 2 (321.99639ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-414542 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-414542 -n default-k8s-diff-port-414542
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.61s)
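The pause checks above (and the newest-cni, embed-certs, and no-preload variants that follow) all run the same fixed sequence; a minimal sketch of that sequence with the commands taken from this log (profile name from this run), where exit status 2 on the paused status checks is the expected outcome rather than a failure:

  PROFILE=default-k8s-diff-port-414542
  out/minikube-linux-amd64 pause -p "$PROFILE" --alsologtostderr -v=1
  # While paused, these report Paused/Stopped and exit with status 2, which the test tolerates.
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE" || true
  out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE" || true
  out/minikube-linux-amd64 unpause -p "$PROFILE" --alsologtostderr -v=1
  # After unpause the same status checks are expected to exit 0 again.
  out/minikube-linux-amd64 status --format='{{.APIServer}}' -p "$PROFILE"
  out/minikube-linux-amd64 status --format='{{.Kubelet}}' -p "$PROFILE"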

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.51s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-979136 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-979136 -n newest-cni-979136
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-979136 -n newest-cni-979136: exit status 2 (313.582613ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-979136 -n newest-cni-979136
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-979136 -n newest-cni-979136: exit status 2 (314.312845ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-979136 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-979136 -n newest-cni-979136
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-979136 -n newest-cni-979136
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.51s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-031687 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-031687 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031687 -n embed-certs-031687
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031687 -n embed-certs-031687: exit status 2 (298.333353ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-031687 -n embed-certs-031687
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-031687 -n embed-certs-031687: exit status 2 (291.483727ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-031687 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-031687 -n embed-certs-031687
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-031687 -n embed-certs-031687
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-306088 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Pause (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-306088 --alsologtostderr -v=1
E0929 12:25:28.333336  360782 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/old-k8s-version-858855/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-306088 -n no-preload-306088
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-306088 -n no-preload-306088: exit status 2 (289.446757ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-306088 -n no-preload-306088
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-306088 -n no-preload-306088: exit status 2 (292.886261ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-306088 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-306088 -n no-preload-306088
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-306088 -n no-preload-306088
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.18s)

                                                
                                    

Test skip (22/341)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (3.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-934155 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-934155" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:56:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-788277
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:57:29 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: kubernetes-upgrade-695405
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21655-357219/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:58:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: pause-191967
contexts:
- context:
    cluster: cert-expiration-788277
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:56:48 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-788277
  name: cert-expiration-788277
- context:
    cluster: kubernetes-upgrade-695405
    user: kubernetes-upgrade-695405
  name: kubernetes-upgrade-695405
- context:
    cluster: pause-191967
    extensions:
    - extension:
        last-update: Mon, 29 Sep 2025 11:58:37 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-191967
  name: pause-191967
current-context: ""
kind: Config
users:
- name: cert-expiration-788277
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/cert-expiration-788277/client.crt
    client-key: /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/cert-expiration-788277/client.key
- name: kubernetes-upgrade-695405
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubernetes-upgrade-695405/client.crt
    client-key: /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/kubernetes-upgrade-695405/client.key
- name: pause-191967
  user:
    client-certificate: /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/pause-191967/client.crt
    client-key: /home/jenkins/minikube-integration/21655-357219/.minikube/profiles/pause-191967/client.key
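Note that current-context is empty in the dump above and the cilium-934155 profile never appears in it, which is consistent with every kubectl call in this debugLogs section failing; a quick way to confirm against the same kubeconfig (standard kubectl subcommands, not part of the test run):

  kubectl config get-contexts              # lists only cert-expiration-788277, kubernetes-upgrade-695405, pause-191967
  kubectl config current-context           # fails: current-context is not set
  kubectl --context cilium-934155 get pods # fails: context "cilium-934155" does not exist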

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-934155

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-934155" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-934155"

                                                
                                                
----------------------- debugLogs end: cilium-934155 [took: 3.19203045s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-934155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-934155
--- SKIP: TestNetworkPlugins/group/cilium (3.34s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-929504" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-929504
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    