Test Report: Docker_Linux_docker_arm64 21652

b9467c4b05d043dd40c691e5c40c4e59f96d3adc:2025-09-29:41683

Test fail (12/341)

TestFunctional/parallel/DashboardCmd (302.29s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085003 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085003 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085003 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-085003 --alsologtostderr -v=1] stderr:
I0929 13:13:20.563487 1170270 out.go:360] Setting OutFile to fd 1 ...
I0929 13:13:20.564762 1170270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:20.564779 1170270 out.go:374] Setting ErrFile to fd 2...
I0929 13:13:20.564785 1170270 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:20.565064 1170270 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
I0929 13:13:20.565793 1170270 mustload.go:65] Loading cluster: functional-085003
I0929 13:13:20.566245 1170270 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:20.566706 1170270 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:20.590590 1170270 host.go:66] Checking if "functional-085003" exists ...
I0929 13:13:20.590919 1170270 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I0929 13:13:20.691970 1170270 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-29 13:13:20.680915817 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
I0929 13:13:20.692084 1170270 api_server.go:166] Checking apiserver status ...
I0929 13:13:20.692164 1170270 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0929 13:13:20.692207 1170270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:20.712368 1170270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
I0929 13:13:20.831799 1170270 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/9055/cgroup
I0929 13:13:20.845448 1170270 api_server.go:182] apiserver freezer: "12:freezer:/docker/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/kubepods/burstable/pod8389b3c5071f04a90f8b816ba5cbd99d/ae17de939d81bee1c5af086b9803f2b620513d471f7d0231817d65c9042e89d6"
I0929 13:13:20.845522 1170270 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/kubepods/burstable/pod8389b3c5071f04a90f8b816ba5cbd99d/ae17de939d81bee1c5af086b9803f2b620513d471f7d0231817d65c9042e89d6/freezer.state
I0929 13:13:20.855580 1170270 api_server.go:204] freezer state: "THAWED"
I0929 13:13:20.855609 1170270 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8441/healthz ...
I0929 13:13:20.864320 1170270 api_server.go:279] https://192.168.49.2:8441/healthz returned 200:
ok
W0929 13:13:20.864355 1170270 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I0929 13:13:20.864597 1170270 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:20.864620 1170270 addons.go:69] Setting dashboard=true in profile "functional-085003"
I0929 13:13:20.864628 1170270 addons.go:238] Setting addon dashboard=true in "functional-085003"
I0929 13:13:20.864663 1170270 host.go:66] Checking if "functional-085003" exists ...
I0929 13:13:20.865080 1170270 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:20.905505 1170270 out.go:179]   - Using image docker.io/kubernetesui/metrics-scraper:v1.0.8
I0929 13:13:20.908609 1170270 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
I0929 13:13:20.911796 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
I0929 13:13:20.911822 1170270 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
I0929 13:13:20.911939 1170270 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:20.947040 1170270 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
I0929 13:13:21.060083 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
I0929 13:13:21.060117 1170270 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
I0929 13:13:21.080259 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
I0929 13:13:21.080286 1170270 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
I0929 13:13:21.099856 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
I0929 13:13:21.099878 1170270 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
I0929 13:13:21.120878 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
I0929 13:13:21.120902 1170270 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes)
I0929 13:13:21.142168 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
I0929 13:13:21.142190 1170270 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
I0929 13:13:21.162607 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
I0929 13:13:21.162630 1170270 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
I0929 13:13:21.181380 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
I0929 13:13:21.181402 1170270 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
I0929 13:13:21.201688 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
I0929 13:13:21.201748 1170270 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
I0929 13:13:21.222256 1170270 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
I0929 13:13:21.222278 1170270 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
I0929 13:13:21.240849 1170270 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
I0929 13:13:22.129460 1170270 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-085003 addons enable metrics-server

I0929 13:13:22.132627 1170270 addons.go:201] Writing out "functional-085003" config to set dashboard=true...
W0929 13:13:22.132942 1170270 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I0929 13:13:22.133647 1170270 kapi.go:59] client config for functional-085003: &rest.Config{Host:"https://192.168.49.2:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I0929 13:13:22.134227 1170270 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0929 13:13:22.134267 1170270 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0929 13:13:22.134289 1170270 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0929 13:13:22.134310 1170270 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0929 13:13:22.134333 1170270 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0929 13:13:22.153493 1170270 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard  kubernetes-dashboard  5b79f336-b7e3-42a9-981b-c53f638bdbb5 953 0 2025-09-29 13:13:22 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:Reconcile k8s-app:kubernetes-dashboard kubernetes.io/minikube-addons:dashboard] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kubernetes-dashboard","kubernetes.io/minikube-addons":"dashboard"},"name":"kubernetes-dashboard","namespace":"kubernetes-dashboard"},"spec":{"ports":[{"port":80,"targetPort":9090}],"selector":{"k8s-app":"kubernetes-dashboard"}}}
] [] [] [{kubectl-client-side-apply Update v1 2025-09-29 13:13:22 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{},"f:k8s-app":{},"f:kubernetes.io/minikube-addons":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:,Protocol:TCP,Port:80,TargetPort:{0 9090 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kubernetes-dashboard,},ClusterIP:10.110.146.72,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.110.146.72],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
W0929 13:13:22.153674 1170270 out.go:285] * Launching proxy ...
* Launching proxy ...
I0929 13:13:22.153771 1170270 dashboard.go:152] Executing: /usr/local/bin/kubectl [/usr/local/bin/kubectl --context functional-085003 proxy --port 36195]
I0929 13:13:22.155729 1170270 dashboard.go:157] Waiting for kubectl to output host:port ...
I0929 13:13:22.221568 1170270 dashboard.go:175] proxy stdout: Starting to serve on 127.0.0.1:36195
W0929 13:13:22.221635 1170270 out.go:285] * Verifying proxy health ...
* Verifying proxy health ...
I0929 13:13:22.238543 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4bfa4ba5-e1e5-4b87-a05c-9e8076d87fde] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718140 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab2c0 TLS:<nil>}
I0929 13:13:22.238623 1170270 retry.go:31] will retry after 88.427µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.242615 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8bb2dd3c-87b4-44d2-b75b-d2a9d0389552] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007181c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab400 TLS:<nil>}
I0929 13:13:22.242681 1170270 retry.go:31] will retry after 134.514µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.246453 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[436ed7cd-4735-49e0-bdc6-4d70346c76d2] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cd800 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454140 TLS:<nil>}
I0929 13:13:22.246533 1170270 retry.go:31] will retry after 325.002µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.250391 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9b2bde31-3b8a-418d-b519-8f0e388bb30e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007182c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab540 TLS:<nil>}
I0929 13:13:22.250449 1170270 retry.go:31] will retry after 353.949µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.254480 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ac91bc3d-d7b6-431d-ab3e-e286da754239] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718340 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab680 TLS:<nil>}
I0929 13:13:22.254538 1170270 retry.go:31] will retry after 754.783µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.267754 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[47af96ca-2d8c-4d19-b362-b6432294b917] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cda80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454280 TLS:<nil>}
I0929 13:13:22.267821 1170270 retry.go:31] will retry after 623.521µs: Temporary Error: unexpected response code: 503
I0929 13:13:22.271897 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43cb4fac-18a1-4e1f-b27e-f449ebe50f8e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdb40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab7c0 TLS:<nil>}
I0929 13:13:22.271974 1170270 retry.go:31] will retry after 1.382322ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.277089 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[43d86896-56f5-49c5-a5bc-1139a45fba20] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdbc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004543c0 TLS:<nil>}
I0929 13:13:22.277151 1170270 retry.go:31] will retry after 1.855482ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.282312 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[36371e1b-beaf-43b9-88ac-1790e7ae7f11] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdc40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454500 TLS:<nil>}
I0929 13:13:22.282373 1170270 retry.go:31] will retry after 3.317696ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.289526 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[4fa9d49e-3ea2-4e90-9d8d-7b99cbd0f905] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718600 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454640 TLS:<nil>}
I0929 13:13:22.289589 1170270 retry.go:31] will retry after 3.902596ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.296956 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[cf339ed6-f2c7-4bcf-8512-8814dbd075bf] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718700 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004548c0 TLS:<nil>}
I0929 13:13:22.297016 1170270 retry.go:31] will retry after 4.191099ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.305277 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b3a04725-4e7b-4bdc-b6ea-dfba4cb3fa5c] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007187c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454a00 TLS:<nil>}
I0929 13:13:22.305339 1170270 retry.go:31] will retry after 5.827675ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.315155 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[da90cb13-6087-469d-90a3-3d0d18ab0ea3] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdd00 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454b40 TLS:<nil>}
I0929 13:13:22.315223 1170270 retry.go:31] will retry after 14.413874ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.333512 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1fc68c17-6ff5-4e6e-bf17-47db06793907] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007cdd80 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454c80 TLS:<nil>}
I0929 13:13:22.333576 1170270 retry.go:31] will retry after 26.370673ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.363973 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[92b3622c-257a-4cc9-9d11-263ab4b7d521] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718940 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004ab900 TLS:<nil>}
I0929 13:13:22.364043 1170270 retry.go:31] will retry after 28.064363ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.395290 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[23c84596-4645-414c-95d4-0e591d798913] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x40007189c0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004aba40 TLS:<nil>}
I0929 13:13:22.395377 1170270 retry.go:31] will retry after 60.423299ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.459702 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9a340133-e6be-442e-bc2a-e07a14c97fb4] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4000718a40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abb80 TLS:<nil>}
I0929 13:13:22.459760 1170270 retry.go:31] will retry after 68.343407ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.532700 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d4d85f2-0988-4a5f-86f7-47cf9334c2d6] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4001584080 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454dc0 TLS:<nil>}
I0929 13:13:22.532764 1170270 retry.go:31] will retry after 115.64046ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.652124 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[8ef36cc1-4f0d-47a5-a271-0ca4aa3f1c0f] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4001584100 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455040 TLS:<nil>}
I0929 13:13:22.652188 1170270 retry.go:31] will retry after 175.809179ms: Temporary Error: unexpected response code: 503
I0929 13:13:22.831396 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[2de806e7-5392-42b2-afd1-b497ce07eff0] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:22 GMT]] Body:0x4001584180 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455180 TLS:<nil>}
I0929 13:13:22.831462 1170270 retry.go:31] will retry after 315.899093ms: Temporary Error: unexpected response code: 503
I0929 13:13:23.150959 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9fa70c79-c875-4bfd-b547-70dda0ddbb0d] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:23 GMT]] Body:0x4001584200 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004552c0 TLS:<nil>}
I0929 13:13:23.151024 1170270 retry.go:31] will retry after 256.825679ms: Temporary Error: unexpected response code: 503
I0929 13:13:23.411331 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9e964bf4-8914-4de0-8efa-9340d260270e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:23 GMT]] Body:0x4000718cc0 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abcc0 TLS:<nil>}
I0929 13:13:23.411393 1170270 retry.go:31] will retry after 627.646157ms: Temporary Error: unexpected response code: 503
I0929 13:13:24.042500 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b43d3ec7-3ef4-4e7b-ac41-afb3174302fb] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:24 GMT]] Body:0x4001584300 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455400 TLS:<nil>}
I0929 13:13:24.042573 1170270 retry.go:31] will retry after 484.981865ms: Temporary Error: unexpected response code: 503
I0929 13:13:24.530707 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[53788a14-ad24-472e-8051-b7bd1eb4083e] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:24 GMT]] Body:0x4001584380 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004abe00 TLS:<nil>}
I0929 13:13:24.530781 1170270 retry.go:31] will retry after 850.947667ms: Temporary Error: unexpected response code: 503
I0929 13:13:25.386151 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[80078a32-90ef-404e-93e9-77570fb205ec] Cache-Control:[no-cache, private] Content-Length:[182] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:25 GMT]] Body:0x4000718e40 ContentLength:182 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032b7c0 TLS:<nil>}
I0929 13:13:25.386218 1170270 retry.go:31] will retry after 1.588942309s: Temporary Error: unexpected response code: 503
I0929 13:13:26.985471 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[64684a67-6df1-4d8f-a4ff-e522a02ccce6] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:26 GMT]] Body:0x4000718f00 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455680 TLS:<nil>}
I0929 13:13:26.985536 1170270 retry.go:31] will retry after 2.823226816s: Temporary Error: unexpected response code: 503
I0929 13:13:29.811870 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[b7a09164-9f92-4e10-a110-b6fdf6b51a3a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:29 GMT]] Body:0x4000718f80 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032bb80 TLS:<nil>}
I0929 13:13:29.811931 1170270 retry.go:31] will retry after 2.893886865s: Temporary Error: unexpected response code: 503
I0929 13:13:32.709431 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[523890c4-4bd4-48a8-894c-2c931750e971] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:32 GMT]] Body:0x4000719000 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x400032bcc0 TLS:<nil>}
I0929 13:13:32.709488 1170270 retry.go:31] will retry after 8.373400345s: Temporary Error: unexpected response code: 503
I0929 13:13:41.086440 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[ed239d1f-fb20-4ca3-a0a0-9fffa2a27cf5] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:41 GMT]] Body:0x40015845c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x40004557c0 TLS:<nil>}
I0929 13:13:41.086522 1170270 retry.go:31] will retry after 7.261294639s: Temporary Error: unexpected response code: 503
I0929 13:13:48.353515 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[7c68d894-00c1-4f11-9f7d-68fb7ea96505] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:48 GMT]] Body:0x4001584680 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455900 TLS:<nil>}
I0929 13:13:48.353588 1170270 retry.go:31] will retry after 7.692540089s: Temporary Error: unexpected response code: 503
I0929 13:13:56.050316 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[6ef24314-7f58-4fc7-81fe-0642cceb0f6a] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:13:56 GMT]] Body:0x4001584740 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455a40 TLS:<nil>}
I0929 13:13:56.050376 1170270 retry.go:31] will retry after 15.886612511s: Temporary Error: unexpected response code: 503
I0929 13:14:11.940432 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[9cd8eb9d-35f7-407a-bfcf-dfbc25b6cd1d] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:14:11 GMT]] Body:0x4001584800 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000220000 TLS:<nil>}
I0929 13:14:11.940496 1170270 retry.go:31] will retry after 18.71422801s: Temporary Error: unexpected response code: 503
I0929 13:14:30.658462 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[5a077466-e64b-4f7c-a071-52c03354b87e] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:14:30 GMT]] Body:0x40015848c0 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455b80 TLS:<nil>}
I0929 13:14:30.658520 1170270 retry.go:31] will retry after 31.558868806s: Temporary Error: unexpected response code: 503
I0929 13:15:02.220947 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[a9724194-c546-4325-80de-981b54910d11] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:15:02 GMT]] Body:0x4001584980 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000455cc0 TLS:<nil>}
I0929 13:15:02.221012 1170270 retry.go:31] will retry after 50.020817285s: Temporary Error: unexpected response code: 503
I0929 13:15:52.244706 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[401f4503-2e61-4a1b-871c-51664e4d6281] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:15:52 GMT]] Body:0x4000718040 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000220140 TLS:<nil>}
I0929 13:15:52.244802 1170270 retry.go:31] will retry after 1m28.693408734s: Temporary Error: unexpected response code: 503
I0929 13:17:20.942432 1170270 dashboard.go:214] http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/ response: <nil> &{Status:503 Service Unavailable StatusCode:503 Proto:HTTP/1.1 ProtoMajor:1 ProtoMinor:1 Header:map[Audit-Id:[1d44e4a6-f29e-4191-874a-305a65a590f0] Cache-Control:[no-cache, private] Content-Length:[188] Content-Type:[application/json] Date:[Mon, 29 Sep 2025 13:17:20 GMT]] Body:0x4001584180 ContentLength:188 TransferEncoding:[] Close:false Uncompressed:false Trailer:map[] Request:0x4000454000 TLS:<nil>}
I0929 13:17:20.942664 1170270 retry.go:31] will retry after 1m22.712590338s: Temporary Error: unexpected response code: 503
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-085003
helpers_test.go:243: (dbg) docker inspect functional-085003:

-- stdout --
	[
	    {
	        "Id": "808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199",
	        "Created": "2025-09-29T13:09:32.739049483Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1153948,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:09:32.807023038Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/hostname",
	        "HostsPath": "/var/lib/docker/containers/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/hosts",
	        "LogPath": "/var/lib/docker/containers/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199/808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199-json.log",
	        "Name": "/functional-085003",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-085003:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-085003",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "808859ee6cd90c0edf7ef87af5e3d7142ab71f43434ae365a1a794f1193cb199",
	                "LowerDir": "/var/lib/docker/overlay2/4351ef35506854cbb363c337eff050f44c53940225172eba186da1c8b60a4277-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4351ef35506854cbb363c337eff050f44c53940225172eba186da1c8b60a4277/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4351ef35506854cbb363c337eff050f44c53940225172eba186da1c8b60a4277/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4351ef35506854cbb363c337eff050f44c53940225172eba186da1c8b60a4277/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-085003",
	                "Source": "/var/lib/docker/volumes/functional-085003/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-085003",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-085003",
	                "name.minikube.sigs.k8s.io": "functional-085003",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6dcbe98b19adcc45964d77ed66b84e986f77ac2325acbbf0dac3fa996b9c5a18",
	            "SandboxKey": "/var/run/docker/netns/6dcbe98b19ad",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33933"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33934"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33937"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33935"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33936"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-085003": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "9a:2a:36:ce:65:e0",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3ce27eabbb5598261257b94b8abdd2a97a18edc168a634dd1aca7dad29ec8ffe",
	                    "EndpointID": "06229491afe6d23fa4576a4176d09fc56361e77a853f195c0bd8feb4168ed161",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-085003",
	                        "808859ee6cd9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-085003 -n functional-085003
helpers_test.go:252: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p functional-085003 logs -n 25: (1.221381335s)
helpers_test.go:260: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-085003 ssh stat /mount-9p/created-by-pod                                                                               │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ ssh            │ functional-085003 ssh sudo umount -f /mount-9p                                                                                    │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ mount          │ -p functional-085003 /tmp/TestFunctionalparallelMountCmdspecific-port2215760306/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ ssh            │ functional-085003 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ ssh            │ functional-085003 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ ssh            │ functional-085003 ssh -- ls -la /mount-9p                                                                                         │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ ssh            │ functional-085003 ssh sudo umount -f /mount-9p                                                                                    │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ mount          │ -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount2 --alsologtostderr -v=1                │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ ssh            │ functional-085003 ssh findmnt -T /mount1                                                                                          │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ mount          │ -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount1 --alsologtostderr -v=1                │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ mount          │ -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount3 --alsologtostderr -v=1                │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ ssh            │ functional-085003 ssh findmnt -T /mount1                                                                                          │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ ssh            │ functional-085003 ssh findmnt -T /mount2                                                                                          │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ ssh            │ functional-085003 ssh findmnt -T /mount3                                                                                          │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ mount          │ -p functional-085003 --kill=true                                                                                                  │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ update-context │ functional-085003 update-context --alsologtostderr -v=2                                                                           │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ update-context │ functional-085003 update-context --alsologtostderr -v=2                                                                           │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ update-context │ functional-085003 update-context --alsologtostderr -v=2                                                                           │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image          │ functional-085003 image ls --format short --alsologtostderr                                                                       │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image          │ functional-085003 image ls --format yaml --alsologtostderr                                                                        │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ ssh            │ functional-085003 ssh pgrep buildkitd                                                                                             │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ image          │ functional-085003 image build -t localhost/my-image:functional-085003 testdata/build --alsologtostderr                            │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image          │ functional-085003 image ls                                                                                                        │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image          │ functional-085003 image ls --format json --alsologtostderr                                                                        │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image          │ functional-085003 image ls --format table --alsologtostderr                                                                       │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:13:20
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:13:20.315190 1170191 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:13:20.315402 1170191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:13:20.315415 1170191 out.go:374] Setting ErrFile to fd 2...
	I0929 13:13:20.315421 1170191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:13:20.315795 1170191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:13:20.316202 1170191 out.go:368] Setting JSON to false
	I0929 13:13:20.317277 1170191 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17753,"bootTime":1759133848,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 13:13:20.317357 1170191 start.go:140] virtualization:  
	I0929 13:13:20.320787 1170191 out.go:179] * [functional-085003] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 13:13:20.323722 1170191 notify.go:220] Checking for updates...
	I0929 13:13:20.324274 1170191 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:13:20.327568 1170191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:13:20.330545 1170191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:13:20.338424 1170191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 13:13:20.342498 1170191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 13:13:20.345440 1170191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:13:20.349413 1170191 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:13:20.350013 1170191 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:13:20.396652 1170191 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 13:13:20.396771 1170191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:13:20.473351 1170191 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-29 13:13:20.463525815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:13:20.473594 1170191 docker.go:318] overlay module found
	I0929 13:13:20.476738 1170191 out.go:179] * Using the docker driver based on the existing profile
	I0929 13:13:20.479675 1170191 start.go:304] selected driver: docker
	I0929 13:13:20.479707 1170191 start.go:924] validating driver "docker" against &{Name:functional-085003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-085003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:13:20.479799 1170191 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:13:20.483411 1170191 out.go:203] 
	W0929 13:13:20.486380 1170191 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 13:13:20.489356 1170191 out.go:203] 
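	
	The exit above is minikube's minimum-memory guard: the requested allocation (250MiB) is checked against the 1800MB floor before the existing docker profile is reused. A minimal reproduction sketch from the CLI, assuming the standard minikube flags used elsewhere in this report (the 250mb value is only illustrative of the failing request):
	
	# trips the same RSRC_INSUFFICIENT_REQ_MEMORY check (250MiB < 1800MB minimum)
	out/minikube-linux-arm64 start -p functional-085003 --memory=250mb
	# clears the floor and matches the Memory:4096 already stored in this profile's config
	out/minikube-linux-arm64 start -p functional-085003 --memory=4096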
	
	
	==> Docker <==
	Sep 29 13:13:23 functional-085003 dockerd[6873]: time="2025-09-29T13:13:23.922247482Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:13:23 functional-085003 dockerd[6873]: time="2025-09-29T13:13:23.973993568Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 13:13:23 functional-085003 dockerd[6873]: time="2025-09-29T13:13:23.995034246Z" level=info msg="ignoring event" container=97992fe39d6a920c959784bd8e31624aa0c83f50ed8f6165bfded1fd43110101 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:13:24 functional-085003 dockerd[6873]: time="2025-09-29T13:13:24.060226640Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:13:24 functional-085003 dockerd[6873]: time="2025-09-29T13:13:24.775431945Z" level=info msg="ignoring event" container=18bc4443d3e4d9af7b72f950e4941208e5da24bfe924bea5e01b59419fb792a3 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:13:24 functional-085003 dockerd[6873]: time="2025-09-29T13:13:24.843701754Z" level=info msg="ignoring event" container=1a1634225e89e791dbef2ad7cc6f4044a4054db1c70cd05ac8dd607c56d21959 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:13:25 functional-085003 cri-dockerd[7633]: time="2025-09-29T13:13:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7f6576ee2b46e39941a14892a0ce7421e43a6bb9f5997df089f68f17cfdcfc7d/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 29 13:13:25 functional-085003 cri-dockerd[7633]: time="2025-09-29T13:13:25Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4f7ebe077ff342b988935a3f0ed1f6e6e3092536387183b611c8f96c9117e0d0/resolv.conf as [nameserver 10.96.0.10 search kubernetes-dashboard.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 29 13:13:25 functional-085003 dockerd[6873]: time="2025-09-29T13:13:25.822019435Z" level=info msg="ignoring event" container=7ea893543aba8811162f8b2b53a9acf176a9f6308165245cb76b0092f52d21ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:13:38 functional-085003 dockerd[6873]: time="2025-09-29T13:13:38.375503527Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 13:13:38 functional-085003 dockerd[6873]: time="2025-09-29T13:13:38.469087784Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:13:39 functional-085003 dockerd[6873]: time="2025-09-29T13:13:39.372306219Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:13:39 functional-085003 dockerd[6873]: time="2025-09-29T13:13:39.457309412Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:14:01 functional-085003 dockerd[6873]: time="2025-09-29T13:14:01.386439014Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 13:14:01 functional-085003 dockerd[6873]: time="2025-09-29T13:14:01.475908660Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:14:04 functional-085003 dockerd[6873]: time="2025-09-29T13:14:04.382462842Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:14:04 functional-085003 dockerd[6873]: time="2025-09-29T13:14:04.471105680Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:14:52 functional-085003 dockerd[6873]: time="2025-09-29T13:14:52.383759426Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 13:14:52 functional-085003 dockerd[6873]: time="2025-09-29T13:14:52.491469286Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:14:54 functional-085003 dockerd[6873]: time="2025-09-29T13:14:54.377680811Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:14:54 functional-085003 dockerd[6873]: time="2025-09-29T13:14:54.469318489Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:16:20 functional-085003 dockerd[6873]: time="2025-09-29T13:16:20.384288557Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:16:20 functional-085003 dockerd[6873]: time="2025-09-29T13:16:20.490186962Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 13:16:22 functional-085003 dockerd[6873]: time="2025-09-29T13:16:22.378751324Z" level=warning msg="reference for unknown type: " digest="sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" remote="docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 13:16:22 functional-085003 dockerd[6873]: time="2025-09-29T13:16:22.466725292Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
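	
	The repeated "toomanyrequests" errors above show the kubernetesui/dashboard and kubernetesui/metrics-scraper pulls being throttled as unauthenticated Docker Hub requests, so the dashboard pods never get their images. A hedged workaround sketch, assuming the standard docker and minikube CLIs; the digests are taken from the log, and loading by tag instead of digest (after a docker tag) works the same way if image load rejects digest references:
	
	# pull once from a host that has run `docker login`, then side-load into the cluster
	docker pull docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
	docker pull docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
	out/minikube-linux-arm64 -p functional-085003 image load docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
	out/minikube-linux-arm64 -p functional-085003 image load docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c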
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	97992fe39d6a9       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   4 minutes ago       Exited              mount-munger              0                   7ea893543aba8       busybox-mount
	8f81d89a03581       nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e                         5 minutes ago       Running             myfrontend                0                   efb5a801b6e32       sp-pod
	c802a2b4c1896       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   09fe235bf6297       hello-node-75c85bcc94-x9877
	d4f47bf1a64ff       kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6           5 minutes ago       Running             echo-server               0                   ce916d84550eb       hello-node-connect-7d85dfc575-tk885
	a9f0809fdcb35       nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                         5 minutes ago       Running             nginx                     0                   a87a129b9b23c       nginx-svc
	813a12a4bb87c       6fc32d66c1411                                                                                         5 minutes ago       Running             kube-proxy                3                   e504852c80e84       kube-proxy-dcjhv
	74a272a8012a4       138784d87c9c5                                                                                         5 minutes ago       Running             coredns                   2                   a00f1a467827d       coredns-66bc5c9577-gcpkj
	edbc65c8dbf7e       ba04bb24b9575                                                                                         6 minutes ago       Running             storage-provisioner       3                   e24fc34c77368       storage-provisioner
	78e62c3b505c1       a1894772a478e                                                                                         6 minutes ago       Running             etcd                      2                   c3bcf2050ccf3       etcd-functional-085003
	fd58c889dfb04       a25f5ef9c34c3                                                                                         6 minutes ago       Running             kube-scheduler            3                   096fdcc1c3544       kube-scheduler-functional-085003
	ae17de939d81b       d291939e99406                                                                                         6 minutes ago       Running             kube-apiserver            0                   570fdddd688b1       kube-apiserver-functional-085003
	9725cf38cb6d4       996be7e86d9b3                                                                                         6 minutes ago       Running             kube-controller-manager   3                   e478c3657f6c1       kube-controller-manager-functional-085003
	e4cc66c08b947       996be7e86d9b3                                                                                         6 minutes ago       Created             kube-controller-manager   2                   dcb03e1de22c3       kube-controller-manager-functional-085003
	3de6c7074f1ff       a25f5ef9c34c3                                                                                         6 minutes ago       Created             kube-scheduler            2                   6ddcdc36d3c35       kube-scheduler-functional-085003
	860e9b282b4e5       6fc32d66c1411                                                                                         6 minutes ago       Created             kube-proxy                2                   e0674ef3ec646       kube-proxy-dcjhv
	1efca39d65ab9       ba04bb24b9575                                                                                         6 minutes ago       Exited              storage-provisioner       2                   07c55813a683a       storage-provisioner
	63e129dd664e8       138784d87c9c5                                                                                         7 minutes ago       Exited              coredns                   1                   29f21fe92d2a1       coredns-66bc5c9577-gcpkj
	d777207fbabf0       a1894772a478e                                                                                         7 minutes ago       Exited              etcd                      1                   2cf4e0cbe8eec       etcd-functional-085003
	
	
	==> coredns [63e129dd664e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: namespaces is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "namespaces" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: services is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "services" in API group "" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: endpointslices.discovery.k8s.io is forbidden: User "system:serviceaccount:kube-system:coredns" cannot list resource "endpointslices" in API group "discovery.k8s.io" at the cluster scope
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46073 - 23168 "HINFO IN 6857215695404878237.5113018390212035553. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013457548s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
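	
	The "forbidden" list errors above appear while this (now exited) CoreDNS instance is waiting out the apiserver restart and its RBAC bindings are not yet usable; the replacement instance in the next section reloads and serves normally. A small verification sketch, assuming a standard kubeadm-style cluster where the role is named system:coredns:
	
	kubectl get clusterrolebinding system:coredns
	kubectl describe clusterrole system:coredns
	# ask the apiserver directly whether the CoreDNS service account can list what it watches
	kubectl auth can-i list namespaces --as=system:serviceaccount:kube-system:coredns
	kubectl auth can-i list endpointslices.discovery.k8s.io --as=system:serviceaccount:kube-system:coredns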
	
	
	==> coredns [74a272a8012a] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37438 - 18741 "HINFO IN 8008677431241937798.8580229539547384357. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.024601397s
	
	
	==> describe nodes <==
	Name:               functional-085003
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-085003
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=functional-085003
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_09_56_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:09:52 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-085003
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:18:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:13:51 +0000   Mon, 29 Sep 2025 13:09:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:13:51 +0000   Mon, 29 Sep 2025 13:09:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:13:51 +0000   Mon, 29 Sep 2025 13:09:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:13:51 +0000   Mon, 29 Sep 2025 13:09:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-085003
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 04fcb50a9a8a45e5bca583ff33deba90
	  System UUID:                7a64509d-22b6-4698-b144-02838e29693b
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-x9877                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m11s
	  default                     hello-node-connect-7d85dfc575-tk885           0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m21s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m31s
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m10s
	  kube-system                 coredns-66bc5c9577-gcpkj                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m21s
	  kube-system                 etcd-functional-085003                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m26s
	  kube-system                 kube-apiserver-functional-085003              250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-controller-manager-functional-085003     200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 kube-proxy-dcjhv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m21s
	  kube-system                 kube-scheduler-functional-085003              100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m26s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m20s
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-7n6xx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-4dm9l         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (2%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 8m19s                  kube-proxy       
	  Normal   Starting                 5m58s                  kube-proxy       
	  Normal   Starting                 7m3s                   kube-proxy       
	  Warning  CgroupV1                 8m33s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  8m33s (x8 over 8m33s)  kubelet          Node functional-085003 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m33s (x8 over 8m33s)  kubelet          Node functional-085003 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m33s (x7 over 8m33s)  kubelet          Node functional-085003 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  8m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    8m26s                  kubelet          Node functional-085003 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 8m26s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8m26s                  kubelet          Node functional-085003 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     8m26s                  kubelet          Node functional-085003 status is now: NodeHasSufficientPID
	  Normal   Starting                 8m26s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           8m22s                  node-controller  Node functional-085003 event: Registered Node functional-085003 in Controller
	  Normal   NodeNotReady             7m15s                  kubelet          Node functional-085003 status is now: NodeNotReady
	  Normal   RegisteredNode           7m2s                   node-controller  Node functional-085003 event: Registered Node functional-085003 in Controller
	  Warning  ContainerGCFailed        6m26s (x2 over 7m26s)  kubelet          rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	  Normal   NodeHasNoDiskPressure    6m7s (x8 over 6m7s)    kubelet          Node functional-085003 status is now: NodeHasNoDiskPressure
	  Warning  CgroupV1                 6m7s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  6m7s (x8 over 6m7s)    kubelet          Node functional-085003 status is now: NodeHasSufficientMemory
	  Normal   Starting                 6m7s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientPID     6m7s (x7 over 6m7s)    kubelet          Node functional-085003 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m7s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           5m58s                  node-controller  Node functional-085003 event: Registered Node functional-085003 in Controller
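	
	As a quick sanity check on the percentages in the "Allocated resources" table above, requests are reported against the node's allocatable values: 750m CPU out of 2 cores (2000m) is 37.5%, displayed as 37%, and 170Mi of memory (174080Ki) out of 8022300Ki allocatable is about 2.2%, displayed as 2%.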
	
	
	==> dmesg <==
	[Sep29 11:47] kauditd_printk_skb: 8 callbacks suppressed
	[Sep29 12:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep29 13:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [78e62c3b505c] <==
	{"level":"warn","ts":"2025-09-29T13:12:19.273158Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.287652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.303681Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.318910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.334142Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.348263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39900","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.366386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.381352Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.396482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39946","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.419804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.433952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39984","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.450381Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.466784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.481567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40052","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.500959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.513423Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.529062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.544556Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.559607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.575736Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40144","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.596035Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.623761Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40176","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.638658Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40194","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.653183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:12:19.722685Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40226","server-name":"","error":"EOF"}
	
	
	==> etcd [d777207fbabf] <==
	{"level":"warn","ts":"2025-09-29T13:11:16.172828Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:16.194673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:16.213943Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:16.254695Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:16.265879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:16.290712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:11:16.469568Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56950","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:11:58.380593Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T13:11:58.380654Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-085003","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-29T13:11:58.380770Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T13:12:05.383074Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"warn","ts":"2025-09-29T13:12:05.383418Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T13:12:05.383503Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T13:12:05.383554Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T13:12:05.383914Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T13:12:05.383987Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T13:12:05.384039Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"error","ts":"2025-09-29T13:12:05.383174Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T13:12:05.387995Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-09-29T13:12:05.390967Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T13:12:05.390985Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T13:12:05.394950Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-29T13:12:05.395040Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T13:12:05.395077Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-29T13:12:05.395090Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-085003","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 13:18:21 up  5:00,  0 users,  load average: 0.06, 0.81, 1.86
	Linux functional-085003 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [ae17de939d81] <==
	I0929 13:12:21.440895       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 13:12:22.221470       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 13:12:22.267292       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:12:22.308851       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0929 13:12:22.317592       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0929 13:12:23.880412       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 13:12:24.128422       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 13:12:24.279822       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 13:12:37.430512       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.99.3.165"}
	I0929 13:12:50.665285       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.100.246.229"}
	I0929 13:13:00.520989       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.104.60.160"}
	I0929 13:13:10.217290       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.103.223.205"}
	E0929 13:13:11.213362       1 watch.go:272] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E0929 13:13:18.597022       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51700: use of closed network connection
	I0929 13:13:21.788055       1 controller.go:667] quota admission added evaluator for: namespaces
	I0929 13:13:22.096442       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.146.72"}
	I0929 13:13:22.119960       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.215.166"}
	I0929 13:13:28.498338       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:13:29.829449       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:14:40.133903       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:14:58.827623       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:16:09.059467       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:16:13.894607       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:17:15.760270       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:17:20.174821       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [9725cf38cb6d] <==
	I0929 13:12:23.897356       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 13:12:23.897364       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 13:12:23.897371       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 13:12:23.900258       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0929 13:12:23.901500       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 13:12:23.901641       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 13:12:23.901764       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-085003"
	I0929 13:12:23.901837       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 13:12:23.907200       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 13:12:23.913942       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 13:12:23.917200       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 13:12:23.921838       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 13:12:23.923082       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 13:12:23.923090       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 13:12:23.923105       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 13:12:23.923114       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 13:12:23.926614       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 13:12:23.931917       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:12:23.931945       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:12:23.931954       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:12:23.936566       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0929 13:13:21.903075       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 13:13:21.906119       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 13:13:21.921822       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0929 13:13:21.925611       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e4cc66c08b94] <==
	
	
	==> kube-proxy [813a12a4bb87] <==
	I0929 13:12:22.529871       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:12:22.829441       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:12:22.938765       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:12:22.938809       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 13:12:22.938875       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:12:23.000374       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:12:23.000902       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:12:23.010480       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:12:23.010947       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:12:23.011886       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:12:23.013735       1 config.go:200] "Starting service config controller"
	I0929 13:12:23.015555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:12:23.013918       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:12:23.015772       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:12:23.013948       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:12:23.015789       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:12:23.021359       1 config.go:309] "Starting node config controller"
	I0929 13:12:23.021383       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:12:23.021391       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:12:23.116650       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:12:23.116744       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 13:12:23.116789       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [860e9b282b4e] <==
	
	
	==> kube-scheduler [3de6c7074f1f] <==
	
	
	==> kube-scheduler [fd58c889dfb0] <==
	I0929 13:12:20.459207       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:12:20.465442       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:12:20.465659       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 13:12:20.466739       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 13:12:20.466812       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 13:12:20.489038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:12:20.489354       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:12:20.489420       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:12:20.489549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 13:12:20.489668       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 13:12:20.489755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:12:20.489919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 13:12:20.490034       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:12:20.490207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:12:20.490217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:12:20.490391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:12:20.490459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:12:20.490515       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 13:12:20.496883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:12:20.497097       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:12:20.497279       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 13:12:20.497491       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:12:20.497665       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 13:12:20.497839       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0929 13:12:22.066783       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 13:16:20 functional-085003 kubelet[8642]: E0929 13:16:20.493643    8642 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 13:16:20 functional-085003 kubelet[8642]: E0929 13:16:20.493727    8642 kuberuntime_manager.go:1449] "Unhandled Error" err="container kubernetes-dashboard start failed in pod kubernetes-dashboard-855c9754f9-4dm9l_kubernetes-dashboard(36569db3-c3cc-4e98-bc60-50502bd2cb31): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 13:16:20 functional-085003 kubelet[8642]: E0929 13:16:20.493762    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:16:22 functional-085003 kubelet[8642]: E0929 13:16:22.469950    8642 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 13:16:22 functional-085003 kubelet[8642]: E0929 13:16:22.470008    8642 kuberuntime_image.go:43] "Failed to pull image" err="Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" image="docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"
	Sep 29 13:16:22 functional-085003 kubelet[8642]: E0929 13:16:22.470088    8642 kuberuntime_manager.go:1449] "Unhandled Error" err="container dashboard-metrics-scraper start failed in pod dashboard-metrics-scraper-77bf4d6c4c-7n6xx_kubernetes-dashboard(b22190f4-f2ef-47d5-9c65-4b4e3c1b9906): ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Sep 29 13:16:22 functional-085003 kubelet[8642]: E0929 13:16:22.470123    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ErrImagePull: \"Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:16:33 functional-085003 kubelet[8642]: E0929 13:16:33.334460    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:16:38 functional-085003 kubelet[8642]: E0929 13:16:38.335151    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:16:44 functional-085003 kubelet[8642]: E0929 13:16:44.336570    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:16:50 functional-085003 kubelet[8642]: E0929 13:16:50.336649    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:16:55 functional-085003 kubelet[8642]: E0929 13:16:55.332420    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:17:03 functional-085003 kubelet[8642]: E0929 13:17:03.332947    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:17:07 functional-085003 kubelet[8642]: E0929 13:17:07.333209    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:17:17 functional-085003 kubelet[8642]: E0929 13:17:17.332972    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:17:20 functional-085003 kubelet[8642]: E0929 13:17:20.340303    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:17:29 functional-085003 kubelet[8642]: E0929 13:17:29.332499    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:17:35 functional-085003 kubelet[8642]: E0929 13:17:35.333394    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:17:41 functional-085003 kubelet[8642]: E0929 13:17:41.333477    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:17:47 functional-085003 kubelet[8642]: E0929 13:17:47.333049    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:17:55 functional-085003 kubelet[8642]: E0929 13:17:55.333437    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:18:00 functional-085003 kubelet[8642]: E0929 13:18:00.334817    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:18:07 functional-085003 kubelet[8642]: E0929 13:18:07.332773    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	Sep 29 13:18:12 functional-085003 kubelet[8642]: E0929 13:18:12.334171    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-4dm9l" podUID="36569db3-c3cc-4e98-bc60-50502bd2cb31"
	Sep 29 13:18:19 functional-085003 kubelet[8642]: E0929 13:18:19.332942    8642 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-7n6xx" podUID="b22190f4-f2ef-47d5-9c65-4b4e3c1b9906"
	
	
	==> storage-provisioner [1efca39d65ab] <==
	I0929 13:11:30.660122       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 13:11:30.660314       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0929 13:11:30.662679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:34.117321       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:38.378349       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:41.976639       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:45.031493       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:48.054619       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:48.059934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 13:11:48.060161       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 13:11:48.060363       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-085003_fa44cbea-4d4b-4476-b6da-1bfa78995fba!
	I0929 13:11:48.061220       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e43ed296-0deb-4fde-872b-2c4d0fef1b50", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-085003_fa44cbea-4d4b-4476-b6da-1bfa78995fba became leader
	W0929 13:11:48.064059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:48.070256       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0929 13:11:48.161084       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-085003_fa44cbea-4d4b-4476-b6da-1bfa78995fba!
	W0929 13:11:50.073919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:50.079098       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:52.082333       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:52.089776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:54.092852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:54.098082       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:56.102374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:56.109572       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:58.114591       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:11:58.119813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [edbc65c8dbf7] <==
	W0929 13:17:57.097670       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:17:59.100570       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:17:59.107312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:01.110860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:01.115826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:03.118413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:03.123163       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:05.126398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:05.133880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:07.137664       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:07.142588       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:09.145387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:09.149956       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:11.153373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:11.158035       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:13.161823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:13.169652       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:15.172974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:15.177245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:17.180095       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:17.184939       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:19.187772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:19.192148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:21.195546       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 13:18:21.200189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
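A note on the storage-provisioner output above: the repeated "v1 Endpoints is deprecated in v1.33+" warnings appear because the provisioner stores its leader-election lock in a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, per the LeaderElection event at 13:11:48), and each renewal of that lock against the v1.34 API server returns the deprecation warning. They are noise here, not the failure. A minimal sketch for inspecting the lock object, assuming the profile is still up (not part of the test run); the holder identity is normally recorded in the control-plane.alpha.kubernetes.io/leader annotation:

	out/minikube-linux-arm64 -p functional-085003 kubectl -- get endpoints k8s.io-minikube-hostpath -n kube-system -o yaml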
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-085003 -n functional-085003
helpers_test.go:269: (dbg) Run:  kubectl --context functional-085003 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount dashboard-metrics-scraper-77bf4d6c4c-7n6xx kubernetes-dashboard-855c9754f9-4dm9l
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-085003 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-7n6xx kubernetes-dashboard-855c9754f9-4dm9l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context functional-085003 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-7n6xx kubernetes-dashboard-855c9754f9-4dm9l: exit status 1 (97.426987ms)

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-085003/192.168.49.2
	Start Time:       Mon, 29 Sep 2025 13:13:21 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.12
	IPs:
	  IP:  10.244.0.12
	Containers:
	  mount-munger:
	    Container ID:  docker://97992fe39d6a920c959784bd8e31624aa0c83f50ed8f6165bfded1fd43110101
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      docker-pullable://gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 29 Sep 2025 13:13:23 +0000
	      Finished:     Mon, 29 Sep 2025 13:13:23 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7ws7t (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-7ws7t:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  5m1s   default-scheduler  Successfully assigned default/busybox-mount to functional-085003
	  Normal  Pulling    5m1s   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     4m59s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.154s (2.155s including waiting). Image size: 3547125 bytes.
	  Normal  Created    4m59s  kubelet            Created container: mount-munger
	  Normal  Started    4m59s  kubelet            Started container mount-munger

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "dashboard-metrics-scraper-77bf4d6c4c-7n6xx" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-4dm9l" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context functional-085003 describe pod busybox-mount dashboard-metrics-scraper-77bf4d6c4c-7n6xx kubernetes-dashboard-855c9754f9-4dm9l: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (302.29s)
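The kubelet log above shows the proximate cause of this failure: both dashboard images (docker.io/kubernetesui/dashboard:v2.7.0 and docker.io/kubernetesui/metrics-scraper:v1.0.8) fail to pull with "toomanyrequests: You have reached your unauthenticated pull rate limit", so the pods sit in ImagePullBackOff and the dashboard URL is never produced. A minimal sketch for checking how many anonymous Docker Hub pulls the runner has left, assuming curl and jq are present on the host (ratelimitpreview/test is Docker's documented probe repository, and a HEAD request should not consume a pull):

	# fetch an anonymous token, then read the ratelimit-* headers from a HEAD request
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -sI -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i '^ratelimit'

If ratelimit-remaining is 0, possible workarounds are pre-loading the two images from a host that already has them cached (for example out/minikube-linux-arm64 -p functional-085003 image load docker.io/kubernetesui/dashboard:v2.7.0) or authenticating the pulls against Docker Hub; neither changes the test itself.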

                                                
                                    
TestMultiControlPlane/serial/DeployApp (12.29s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 kubectl -- rollout status deployment/busybox: (5.508673139s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.io: exit status 1 (251.858743ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.io'
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-7b57f96db7-2lt6z could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default: exit status 1 (276.951003ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default'
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-7b57f96db7-2lt6z could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (241.488193ms)

                                                
                                                
-- stdout --
	Server:    10.96.0.10
	Address 1: 10.96.0.10
	

                                                
                                                
-- /stdout --
** stderr ** 
	nslookup: can't resolve 'kubernetes.default.svc.cluster.local'
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-7b57f96db7-2lt6z could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.default.svc.cluster.local
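Only one replica, busybox-7b57f96db7-2lt6z, failed all three lookups; its siblings resolved the same names, which usually implicates the CoreDNS instance or node serving that pod rather than the Service or the manifest. A few follow-up checks, sketched with the same minikube kubectl wrapper the test uses (these are not part of the test run):

	# where the failing pod and the CoreDNS pods are scheduled
	out/minikube-linux-arm64 -p ha-399583 kubectl -- get pods -o wide
	out/minikube-linux-arm64 -p ha-399583 kubectl -- get pods -n kube-system -l k8s-app=kube-dns -o wide
	# query the cluster DNS Service (10.96.0.10, as shown in the stdout above) directly from the failing pod
	out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default.svc.cluster.local 10.96.0.10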
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-399583
helpers_test.go:243: (dbg) docker inspect ha-399583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0",
	        "Created": "2025-09-29T13:18:30.192674344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1175337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:18:30.249493703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/hosts",
	        "LogPath": "/var/lib/docker/containers/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0-json.log",
	        "Name": "/ha-399583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-399583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-399583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0",
	                "LowerDir": "/var/lib/docker/overlay2/f0822d0b552f9e4e2efeccb3b2b40c10abb4291265f6a6cb22e145e8a4a4e4a1-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0822d0b552f9e4e2efeccb3b2b40c10abb4291265f6a6cb22e145e8a4a4e4a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0822d0b552f9e4e2efeccb3b2b40c10abb4291265f6a6cb22e145e8a4a4e4a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0822d0b552f9e4e2efeccb3b2b40c10abb4291265f6a6cb22e145e8a4a4e4a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-399583",
	                "Source": "/var/lib/docker/volumes/ha-399583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-399583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-399583",
	                "name.minikube.sigs.k8s.io": "ha-399583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c434fd6d9a32fd353d4c5388bdbf4bc9ebfdd2f75c7ea365d882b05b65a187",
	            "SandboxKey": "/var/run/docker/netns/93c434fd6d9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33942"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33940"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-399583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:05:70:ec:5f:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85cc826cc833d1082aa1a3789e79bbf0a30c36137b1e336517db46ba97d3357c",
	                    "EndpointID": "6885f8b403088835e27b130473eb4cf9ec77d0dfd6bf48e4f1c2d359f5836ab8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-399583",
	                        "4ff0a10009db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
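The inspect dump mostly confirms that the control-plane container itself is healthy: running since 13:18:30, attached to the ha-399583 network as 192.168.49.2, with port 8443 published on 127.0.0.1:33941. When only those fields matter, docker inspect --format keeps the post-mortem shorter; a sketch of equivalent, filtered queries (same data as the dump above):

	docker inspect ha-399583 --format '{{.State.Status}} {{(index .NetworkSettings.Networks "ha-399583").IPAddress}}'
	docker inspect ha-399583 --format '{{json .NetworkSettings.Ports}}'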
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-399583 -n ha-399583
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 logs -n 25: (1.469290257s)
helpers_test.go:260: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                       ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image   │ functional-085003 image ls --format short --alsologtostderr                                                       │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image   │ functional-085003 image ls --format yaml --alsologtostderr                                                        │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ ssh     │ functional-085003 ssh pgrep buildkitd                                                                             │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ image   │ functional-085003 image build -t localhost/my-image:functional-085003 testdata/build --alsologtostderr            │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image   │ functional-085003 image ls                                                                                        │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image   │ functional-085003 image ls --format json --alsologtostderr                                                        │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image   │ functional-085003 image ls --format table --alsologtostderr                                                       │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ delete  │ -p functional-085003                                                                                              │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:18 UTC │ 29 Sep 25 13:18 UTC │
	│ start   │ ha-399583 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:18 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                  │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- rollout status deployment/busybox                                                            │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                              │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                             │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.io                                      │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │                     │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.io                                      │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.io                                      │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.io                                      │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default                                 │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.default                                 │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.default                                 │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.default                                 │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default.svc.cluster.local               │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.default.svc.cluster.local               │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.default.svc.cluster.local               │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.default.svc.cluster.local               │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:18:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:18:25.325660 1174954 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:18:25.325856 1174954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:18:25.325888 1174954 out.go:374] Setting ErrFile to fd 2...
	I0929 13:18:25.325911 1174954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:18:25.326183 1174954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:18:25.326627 1174954 out.go:368] Setting JSON to false
	I0929 13:18:25.327555 1174954 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18058,"bootTime":1759133848,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 13:18:25.327654 1174954 start.go:140] virtualization:  
	I0929 13:18:25.331392 1174954 out.go:179] * [ha-399583] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 13:18:25.335709 1174954 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:18:25.335907 1174954 notify.go:220] Checking for updates...
	I0929 13:18:25.342060 1174954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:18:25.345287 1174954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:18:25.348296 1174954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 13:18:25.351485 1174954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 13:18:25.354476 1174954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:18:25.357728 1174954 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:18:25.390086 1174954 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 13:18:25.390212 1174954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:18:25.451517 1174954 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-29 13:18:25.44201949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:18:25.451633 1174954 docker.go:318] overlay module found
	I0929 13:18:25.454860 1174954 out.go:179] * Using the docker driver based on user configuration
	I0929 13:18:25.457805 1174954 start.go:304] selected driver: docker
	I0929 13:18:25.457828 1174954 start.go:924] validating driver "docker" against <nil>
	I0929 13:18:25.457843 1174954 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:18:25.458546 1174954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:18:25.528041 1174954 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-29 13:18:25.519102084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:18:25.528194 1174954 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 13:18:25.528429 1174954 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:18:25.531456 1174954 out.go:179] * Using Docker driver with root privileges
	I0929 13:18:25.534332 1174954 cni.go:84] Creating CNI manager for ""
	I0929 13:18:25.534410 1174954 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0929 13:18:25.534423 1174954 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 13:18:25.534514 1174954 start.go:348] cluster config:
	{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:18:25.537689 1174954 out.go:179] * Starting "ha-399583" primary control-plane node in "ha-399583" cluster
	I0929 13:18:25.540683 1174954 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 13:18:25.543629 1174954 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:18:25.546597 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:18:25.546663 1174954 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 13:18:25.546694 1174954 cache.go:58] Caching tarball of preloaded images
	I0929 13:18:25.546692 1174954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:18:25.546795 1174954 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 13:18:25.546806 1174954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 13:18:25.547172 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:18:25.547202 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json: {Name:mkae797a6658ba3b436ea5ee875282b75c92e17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:25.565926 1174954 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:18:25.565953 1174954 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:18:25.565967 1174954 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:18:25.565991 1174954 start.go:360] acquireMachinesLock for ha-399583: {Name:mk6a93adabf6340a9742e1fe127a7da8b14537cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:18:25.566094 1174954 start.go:364] duration metric: took 87µs to acquireMachinesLock for "ha-399583"
	I0929 13:18:25.566126 1174954 start.go:93] Provisioning new machine with config: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:18:25.566199 1174954 start.go:125] createHost starting for "" (driver="docker")
	I0929 13:18:25.569651 1174954 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:18:25.569898 1174954 start.go:159] libmachine.API.Create for "ha-399583" (driver="docker")
	I0929 13:18:25.569936 1174954 client.go:168] LocalClient.Create starting
	I0929 13:18:25.570028 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem
	I0929 13:18:25.570066 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:18:25.570084 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:18:25.570150 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem
	I0929 13:18:25.570170 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:18:25.570183 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:18:25.570548 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 13:18:25.586603 1174954 cli_runner.go:211] docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 13:18:25.586701 1174954 network_create.go:284] running [docker network inspect ha-399583] to gather additional debugging logs...
	I0929 13:18:25.586722 1174954 cli_runner.go:164] Run: docker network inspect ha-399583
	W0929 13:18:25.602505 1174954 cli_runner.go:211] docker network inspect ha-399583 returned with exit code 1
	I0929 13:18:25.602536 1174954 network_create.go:287] error running [docker network inspect ha-399583]: docker network inspect ha-399583: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-399583 not found
	I0929 13:18:25.602550 1174954 network_create.go:289] output of [docker network inspect ha-399583]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-399583 not found
	
	** /stderr **
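The exit-status-1 probe above is how the start path decides whether a cluster network must be created: `docker network inspect` failing with "network ha-399583 not found" is treated as "absent", and the create step follows. The snippet below is a minimal, hypothetical Go sketch of that "inspect, create only if missing" pattern, not minikube source; it assumes the docker CLI is on PATH, and the network name and subnet are example values taken from this log.

// networkprobe.go: illustrative sketch of the probe-then-create step above.
package main

import (
	"fmt"
	"os/exec"
)

// networkExists runs `docker network inspect <name>` and treats any non-zero
// exit (e.g. "network ... not found") as "the network does not exist yet".
func networkExists(name string) bool {
	cmd := exec.Command("docker", "network", "inspect", name, "--format", "{{.Name}}")
	return cmd.Run() == nil
}

func main() {
	name := "ha-399583" // example value from the log
	if !networkExists(name) {
		// Mirrors the subsequent create step, with a fixed example subnet;
		// minikube first picks a free private subnet before creating.
		create := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.49.0/24",
			"--gateway=192.168.49.1",
			name)
		if out, err := create.CombinedOutput(); err != nil {
			fmt.Printf("create failed: %v\n%s", err, out)
			return
		}
	}
	fmt.Println("network ready:", name)
}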
	I0929 13:18:25.602658 1174954 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:18:25.618134 1174954 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017e6c80}
	I0929 13:18:25.618170 1174954 network_create.go:124] attempt to create docker network ha-399583 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 13:18:25.618223 1174954 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-399583 ha-399583
	I0929 13:18:25.670767 1174954 network_create.go:108] docker network ha-399583 192.168.49.0/24 created
	I0929 13:18:25.670801 1174954 kic.go:121] calculated static IP "192.168.49.2" for the "ha-399583" container
	I0929 13:18:25.670875 1174954 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:18:25.686104 1174954 cli_runner.go:164] Run: docker volume create ha-399583 --label name.minikube.sigs.k8s.io=ha-399583 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:18:25.703494 1174954 oci.go:103] Successfully created a docker volume ha-399583
	I0929 13:18:25.703602 1174954 cli_runner.go:164] Run: docker run --rm --name ha-399583-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583 --entrypoint /usr/bin/test -v ha-399583:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:18:26.253990 1174954 oci.go:107] Successfully prepared a docker volume ha-399583
	I0929 13:18:26.254053 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:18:26.254085 1174954 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:18:26.254161 1174954 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:18:30.123296 1174954 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.869094796s)
	I0929 13:18:30.123334 1174954 kic.go:203] duration metric: took 3.869254742s to extract preloaded images to volume ...
	W0929 13:18:30.123495 1174954 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 13:18:30.123608 1174954 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:18:30.176759 1174954 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-399583 --name ha-399583 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-399583 --network ha-399583 --ip 192.168.49.2 --volume ha-399583:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:18:30.460613 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Running}}
	I0929 13:18:30.492877 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:18:30.519153 1174954 cli_runner.go:164] Run: docker exec ha-399583 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:18:30.569683 1174954 oci.go:144] the created container "ha-399583" has a running status.
	I0929 13:18:30.569719 1174954 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa...
	I0929 13:18:30.855932 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0929 13:18:30.856058 1174954 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:18:30.877650 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:18:30.898680 1174954 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:18:30.898699 1174954 kic_runner.go:114] Args: [docker exec --privileged ha-399583 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:18:30.961194 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:18:30.996709 1174954 machine.go:93] provisionDockerMachine start ...
	I0929 13:18:30.996801 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:31.036817 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:31.037146 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:31.037162 1174954 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:18:31.037867 1174954 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59688->127.0.0.1:33938: read: connection reset by peer
	I0929 13:18:34.175845 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583
	
	I0929 13:18:34.175867 1174954 ubuntu.go:182] provisioning hostname "ha-399583"
	I0929 13:18:34.175962 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:34.193970 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:34.194295 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:34.194311 1174954 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-399583 && echo "ha-399583" | sudo tee /etc/hostname
	I0929 13:18:34.344270 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583
	
	I0929 13:18:34.344370 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:34.361798 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:34.362119 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:34.362142 1174954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-399583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-399583/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-399583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:18:34.500384 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:18:34.500414 1174954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 13:18:34.500433 1174954 ubuntu.go:190] setting up certificates
	I0929 13:18:34.500487 1174954 provision.go:84] configureAuth start
	I0929 13:18:34.500574 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583
	I0929 13:18:34.517650 1174954 provision.go:143] copyHostCerts
	I0929 13:18:34.517695 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:18:34.517730 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 13:18:34.517742 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:18:34.517851 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 13:18:34.517942 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:18:34.517965 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 13:18:34.517976 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:18:34.518003 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 13:18:34.518049 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:18:34.518069 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 13:18:34.518078 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:18:34.518102 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 13:18:34.518153 1174954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.ha-399583 san=[127.0.0.1 192.168.49.2 ha-399583 localhost minikube]
	I0929 13:18:35.154273 1174954 provision.go:177] copyRemoteCerts
	I0929 13:18:35.154354 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:18:35.154396 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:35.175285 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:35.273188 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0929 13:18:35.273256 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:18:35.297804 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0929 13:18:35.297864 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0929 13:18:35.322638 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0929 13:18:35.322704 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:18:35.347450 1174954 provision.go:87] duration metric: took 846.935387ms to configureAuth
	I0929 13:18:35.347480 1174954 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:18:35.347731 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:18:35.347798 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:35.365115 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:35.365432 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:35.365448 1174954 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 13:18:35.505044 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 13:18:35.505065 1174954 ubuntu.go:71] root file system type: overlay
	I0929 13:18:35.505178 1174954 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 13:18:35.505240 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:35.522915 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:35.523214 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:35.523296 1174954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 13:18:35.677571 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 13:18:35.677698 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:35.695403 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:35.695708 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:35.695730 1174954 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 13:18:36.529221 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 13:18:35.670941125 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 13:18:36.529300 1174954 machine.go:96] duration metric: took 5.532566432s to provisionDockerMachine
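The provisioning step above writes docker.service.new over SSH, then runs `diff -u` against the installed unit and only moves the new file into place (followed by daemon-reload, enable, restart) when the two differ; the drop-in's empty `ExecStart=` line is what clears the inherited command so systemd does not see two ExecStart settings. Below is a hedged Go sketch of that compare-then-swap update, under the assumption of local file access and a systemctl binary; paths and the error handling are illustrative, not the project's implementation.

// unitupdate.go: idempotent "replace unit and restart only on change" sketch.
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func updateUnit(current, proposed string) error {
	oldData, _ := os.ReadFile(current) // a missing current unit reads as empty
	newData, err := os.ReadFile(proposed)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return nil // unchanged: skip daemon-reload/enable/restart entirely
	}
	if err := os.Rename(proposed, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %v\n%s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := updateUnit("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Println("unit update failed:", err)
	}
}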
	I0929 13:18:36.529327 1174954 client.go:171] duration metric: took 10.959380481s to LocalClient.Create
	I0929 13:18:36.529395 1174954 start.go:167] duration metric: took 10.959483827s to libmachine.API.Create "ha-399583"
	I0929 13:18:36.529434 1174954 start.go:293] postStartSetup for "ha-399583" (driver="docker")
	I0929 13:18:36.529459 1174954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:18:36.529556 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:18:36.529638 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:36.554644 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:36.653460 1174954 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:18:36.656534 1174954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:18:36.656571 1174954 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:18:36.656581 1174954 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:18:36.656588 1174954 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:18:36.656598 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 13:18:36.656655 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 13:18:36.656743 1174954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 13:18:36.656755 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /etc/ssl/certs/11276402.pem
	I0929 13:18:36.656864 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:18:36.665215 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:18:36.689070 1174954 start.go:296] duration metric: took 159.606314ms for postStartSetup
	I0929 13:18:36.689535 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583
	I0929 13:18:36.706663 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:18:36.706952 1174954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:18:36.707015 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:36.723615 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:36.821307 1174954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:18:36.825898 1174954 start.go:128] duration metric: took 11.25968165s to createHost
	I0929 13:18:36.825921 1174954 start.go:83] releasing machines lock for "ha-399583", held for 11.259812623s
	I0929 13:18:36.825994 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583
	I0929 13:18:36.847632 1174954 ssh_runner.go:195] Run: cat /version.json
	I0929 13:18:36.847697 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:36.847948 1174954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:18:36.848012 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:36.866297 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:36.868804 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:37.090836 1174954 ssh_runner.go:195] Run: systemctl --version
	I0929 13:18:37.095122 1174954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:18:37.099451 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:18:37.125013 1174954 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:18:37.125093 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:18:37.155198 1174954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:18:37.155225 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:18:37.155260 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:18:37.155359 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:18:37.171885 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:18:37.181883 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:18:37.191626 1174954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 13:18:37.191691 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 13:18:37.201359 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:18:37.211710 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:18:37.222842 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:18:37.232813 1174954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:18:37.242409 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:18:37.252058 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:18:37.261972 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:18:37.271400 1174954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:18:37.279991 1174954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:18:37.288465 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:37.371632 1174954 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:18:37.458610 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:18:37.458712 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:18:37.458782 1174954 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 13:18:37.471372 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:18:37.483533 1174954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:18:37.506680 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:18:37.518759 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:18:37.531288 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:18:37.548173 1174954 ssh_runner.go:195] Run: which cri-dockerd
	I0929 13:18:37.551762 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 13:18:37.560848 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 13:18:37.579206 1174954 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 13:18:37.671443 1174954 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 13:18:37.763836 1174954 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 13:18:37.764018 1174954 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 13:18:37.783762 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 13:18:37.796204 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:37.889023 1174954 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 13:18:38.285326 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:18:38.297127 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 13:18:38.309301 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:18:38.321343 1174954 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 13:18:38.415073 1174954 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 13:18:38.506484 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:38.587249 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 13:18:38.601893 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 13:18:38.613784 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:38.709996 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 13:18:38.779650 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:18:38.792850 1174954 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 13:18:38.792919 1174954 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 13:18:38.796394 1174954 start.go:563] Will wait 60s for crictl version
	I0929 13:18:38.796457 1174954 ssh_runner.go:195] Run: which crictl
	I0929 13:18:38.800012 1174954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:18:38.840429 1174954 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 13:18:38.840596 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:18:38.863675 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:18:38.892228 1174954 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 13:18:38.892348 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:18:38.908433 1174954 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 13:18:38.912222 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:18:38.923132 1174954 kubeadm.go:875] updating cluster {Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:18:38.923255 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:18:38.923319 1174954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 13:18:38.942060 1174954 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 13:18:38.942085 1174954 docker.go:621] Images already preloaded, skipping extraction
	I0929 13:18:38.942148 1174954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 13:18:38.961435 1174954 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 13:18:38.961461 1174954 cache_images.go:85] Images are preloaded, skipping loading
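The two `docker images --format {{.Repository}}:{{.Tag}}` runs above are what lets the start path conclude "Images are preloaded, skipping loading". A minimal sketch of that check is below, assuming only the docker CLI; the required list is copied from the log output and the helper name is hypothetical.

// imagecheck.go: list local image tags and verify a required set is present.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func localImages() (map[string]bool, error) {
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		return nil, err
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if line != "" {
			have[line] = true
		}
	}
	return have, nil
}

func main() {
	required := []string{
		"registry.k8s.io/kube-apiserver:v1.34.0",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	have, err := localImages()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	for _, img := range required {
		if !have[img] {
			fmt.Println("missing, would need to load:", img)
			return
		}
	}
	fmt.Println("images already present, skipping load")
}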
	I0929 13:18:38.961471 1174954 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0929 13:18:38.961561 1174954 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-399583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
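
The kubelet drop-in shown above (kubeadm.go:938) is rendered from the node's config. A minimal sketch of that kind of templating in Go, using a hand-written template and the values from this log; it is an illustration, not minikube's actual generator:

package main

import (
	"os"
	"text/template"
)

// Illustrative template only; minikube's real drop-in generator covers more
// flags than the ones logged above.
const kubeletDropIn = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletDropIn))
	// Values taken from the log entry above.
	err := t.Execute(os.Stdout, struct {
		KubeletPath, NodeName, NodeIP string
	}{
		KubeletPath: "/var/lib/minikube/binaries/v1.34.0/kubelet",
		NodeName:    "ha-399583",
		NodeIP:      "192.168.49.2",
	})
	if err != nil {
		panic(err)
	}
}
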
	I0929 13:18:38.961640 1174954 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 13:18:39.017894 1174954 cni.go:84] Creating CNI manager for ""
	I0929 13:18:39.017916 1174954 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0929 13:18:39.017927 1174954 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:18:39.017951 1174954 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-399583 NodeName:ha-399583 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:18:39.018079 1174954 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-399583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
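
The kubeadm config generated above is a multi-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets written to /var/tmp/minikube/kubeadm.yaml a few lines further down. A minimal sketch of reading a few ClusterConfiguration fields back out with gopkg.in/yaml.v3, assuming a local copy of the file; the field names come from the document shown above:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

// clusterConfig captures only the ClusterConfiguration fields we print below;
// the other documents in the stream are decoded and ignored.
type clusterConfig struct {
	Kind                 string `yaml:"kind"`
	ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
	Networking           struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	f, err := os.Open("kubeadm.yaml") // e.g. a copy of /var/tmp/minikube/kubeadm.yaml
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc clusterConfig
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		if doc.Kind == "ClusterConfiguration" {
			fmt.Println("control plane:", doc.ControlPlaneEndpoint)
			fmt.Println("pod subnet:", doc.Networking.PodSubnet)
			fmt.Println("service CIDR:", doc.Networking.ServiceSubnet)
		}
	}
}
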
	
	I0929 13:18:39.018100 1174954 kube-vip.go:115] generating kube-vip config ...
	I0929 13:18:39.018158 1174954 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0929 13:18:39.031327 1174954 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:18:39.031430 1174954 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
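
Because `sudo sh -c "lsmod | grep ip_vs"` exited non-zero (13:18:39.031), the IPVS-based control-plane load-balancing is skipped, while the static kube-vip pod above still advertises the HA VIP 192.168.49.254 on eth0. A rough sketch of an equivalent module check that reads /proc/modules instead of shelling out to lsmod; this is an assumption for illustration, not minikube's code:

package main

import (
	"fmt"
	"os"
	"strings"
)

// hasIPVS reports whether the ip_vs module shows up in /proc/modules, which
// is the same information `lsmod | grep ip_vs` looks at.
func hasIPVS() (bool, error) {
	data, err := os.ReadFile("/proc/modules")
	if err != nil {
		return false, err
	}
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasPrefix(line, "ip_vs ") {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := hasIPVS()
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read /proc/modules:", err)
		os.Exit(1)
	}
	fmt.Println("ip_vs loaded:", ok)
}
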
	I0929 13:18:39.031495 1174954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:18:39.040417 1174954 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:18:39.040488 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0929 13:18:39.049491 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0929 13:18:39.067606 1174954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:18:39.086175 1174954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0929 13:18:39.104681 1174954 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0929 13:18:39.122790 1174954 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0929 13:18:39.126235 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:18:39.136919 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:39.226079 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:18:39.242061 1174954 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583 for IP: 192.168.49.2
	I0929 13:18:39.242094 1174954 certs.go:194] generating shared ca certs ...
	I0929 13:18:39.242110 1174954 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:39.242316 1174954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 13:18:39.242378 1174954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 13:18:39.242392 1174954 certs.go:256] generating profile certs ...
	I0929 13:18:39.242466 1174954 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key
	I0929 13:18:39.242485 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt with IP's: []
	I0929 13:18:39.957115 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt ...
	I0929 13:18:39.957148 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt: {Name:mk1d73907125fade7f91d0fe8012be0fdd8c8d6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:39.957386 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key ...
	I0929 13:18:39.957402 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key: {Name:mk7e3fe444e6167839184499439714d7a7842523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:39.957500 1174954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.34dec115
	I0929 13:18:39.957518 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.34dec115 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0929 13:18:40.191674 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.34dec115 ...
	I0929 13:18:40.191712 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.34dec115: {Name:mkd4d8e4bece92b6c9105bc5a6d7f51e2f611f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:40.191913 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.34dec115 ...
	I0929 13:18:40.191928 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.34dec115: {Name:mk208156a7f3fea25f75539b023d4edfe837050e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:40.192026 1174954 certs.go:381] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.34dec115 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt
	I0929 13:18:40.192111 1174954 certs.go:385] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.34dec115 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key
	I0929 13:18:40.192172 1174954 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key
	I0929 13:18:40.192193 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt with IP's: []
	I0929 13:18:40.468298 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt ...
	I0929 13:18:40.468331 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt: {Name:mk35b7db6803c80f90ba766bd1daace4cc8b3e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:40.468539 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key ...
	I0929 13:18:40.468554 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key: {Name:mk25bd552775c6992e7bb37dd60dfd938facc3eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
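
The apiserver profile cert generated above is issued for the service IP (10.96.0.1), loopback, 10.0.0.1, the node IP (192.168.49.2) and the HA VIP (192.168.49.254). A self-contained sketch that builds a certificate carrying the same IP SANs, self-signed here to keep it short (minikube signs with its minikubeCA instead):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// Same IP SANs the log reports for the apiserver profile cert.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
		},
	}
	// Self-signed for brevity; minikube signs with its CA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
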
	I0929 13:18:40.468640 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0929 13:18:40.468662 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0929 13:18:40.468675 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0929 13:18:40.468691 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0929 13:18:40.468704 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0929 13:18:40.468720 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0929 13:18:40.468731 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0929 13:18:40.468749 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0929 13:18:40.468802 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 13:18:40.468844 1174954 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 13:18:40.468858 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:18:40.468883 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:18:40.468916 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:18:40.468942 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 13:18:40.468998 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:18:40.469031 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /usr/share/ca-certificates/11276402.pem
	I0929 13:18:40.469047 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:18:40.469063 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem -> /usr/share/ca-certificates/1127640.pem
	I0929 13:18:40.469689 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:18:40.495407 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:18:40.519592 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:18:40.544498 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:18:40.569218 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 13:18:40.593655 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:18:40.618016 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:18:40.642675 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:18:40.666400 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 13:18:40.691296 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:18:40.716474 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 13:18:40.741528 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:18:40.759513 1174954 ssh_runner.go:195] Run: openssl version
	I0929 13:18:40.765304 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 13:18:40.774825 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 13:18:40.778379 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 13:18:40.778495 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 13:18:40.785766 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 13:18:40.795303 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 13:18:40.805885 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 13:18:40.809909 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 13:18:40.809976 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 13:18:40.817495 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:18:40.827153 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:18:40.839099 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:18:40.843069 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:18:40.843150 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:18:40.850770 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:18:40.863600 1174954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:18:40.866904 1174954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
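
The failed stat above is what the run treats as "likely first start": a missing apiserver-kubelet-client.crt means kubeadm has never initialized this node. The same check done locally, as a small sketch:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	// A missing apiserver-kubelet-client.crt is read as "this node was never
	// initialized by kubeadm", mirroring the decision logged above.
	_, err := os.Stat("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	switch {
	case err == nil:
		fmt.Println("cert exists: existing cluster")
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println("cert missing: likely first start")
	default:
		fmt.Println("stat failed:", err)
	}
}
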
	I0929 13:18:40.866957 1174954 kubeadm.go:392] StartCluster: {Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:18:40.867089 1174954 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 13:18:40.884822 1174954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:18:40.893826 1174954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 13:18:40.902745 1174954 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 13:18:40.902829 1174954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 13:18:40.911734 1174954 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 13:18:40.911800 1174954 kubeadm.go:157] found existing configuration files:
	
	I0929 13:18:40.911867 1174954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 13:18:40.921241 1174954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 13:18:40.921326 1174954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 13:18:40.930129 1174954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 13:18:40.939251 1174954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 13:18:40.939329 1174954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 13:18:40.947772 1174954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 13:18:40.956866 1174954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 13:18:40.956931 1174954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 13:18:40.965577 1174954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 13:18:40.974666 1174954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 13:18:40.974738 1174954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
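
The four grep/rm pairs above apply one rule per kubeconfig: if admin.conf, kubelet.conf, controller-manager.conf or scheduler.conf does not reference https://control-plane.minikube.internal:8443 (here they simply do not exist yet), remove it so kubeadm regenerates it. A compact sketch of that cleanup, assuming local file access rather than the ssh_runner used in the log:

package main

import (
	"bytes"
	"fmt"
	"os"
	"path/filepath"
)

const wantEndpoint = "https://control-plane.minikube.internal:8443"

func main() {
	for _, name := range []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"} {
		path := filepath.Join("/etc/kubernetes", name)
		data, err := os.ReadFile(path)
		if err != nil || !bytes.Contains(data, []byte(wantEndpoint)) {
			// Missing file (first start, as in this log) or stale endpoint:
			// drop it so kubeadm writes a fresh one.
			fmt.Println("removing stale config:", path)
			_ = os.Remove(path)
		}
	}
}
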
	I0929 13:18:40.983386 1174954 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 13:18:41.029892 1174954 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 13:18:41.030142 1174954 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 13:18:41.051235 1174954 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 13:18:41.051410 1174954 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0929 13:18:41.051488 1174954 kubeadm.go:310] OS: Linux
	I0929 13:18:41.051573 1174954 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 13:18:41.051649 1174954 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0929 13:18:41.051731 1174954 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 13:18:41.051818 1174954 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 13:18:41.051899 1174954 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 13:18:41.052026 1174954 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 13:18:41.052116 1174954 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 13:18:41.052191 1174954 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 13:18:41.052274 1174954 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0929 13:18:41.111979 1174954 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 13:18:41.112139 1174954 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 13:18:41.112240 1174954 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 13:18:41.128166 1174954 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 13:18:41.134243 1174954 out.go:252]   - Generating certificates and keys ...
	I0929 13:18:41.134350 1174954 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 13:18:41.134424 1174954 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 13:18:41.453292 1174954 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 13:18:42.122861 1174954 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 13:18:42.827036 1174954 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 13:18:43.073920 1174954 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 13:18:43.351514 1174954 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 13:18:43.351819 1174954 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-399583 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 13:18:43.533844 1174954 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 13:18:43.534171 1174954 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-399583 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 13:18:44.006192 1174954 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 13:18:44.750617 1174954 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 13:18:45.548351 1174954 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 13:18:45.548663 1174954 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 13:18:46.293239 1174954 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 13:18:46.349784 1174954 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 13:18:46.488964 1174954 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 13:18:47.153378 1174954 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 13:18:48.135742 1174954 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 13:18:48.136561 1174954 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 13:18:48.139329 1174954 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 13:18:48.142720 1174954 out.go:252]   - Booting up control plane ...
	I0929 13:18:48.142849 1174954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 13:18:48.142937 1174954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 13:18:48.143354 1174954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 13:18:48.155872 1174954 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 13:18:48.156214 1174954 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 13:18:48.163579 1174954 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 13:18:48.164089 1174954 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 13:18:48.164366 1174954 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 13:18:48.258843 1174954 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 13:18:48.258985 1174954 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 13:18:49.256853 1174954 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00160354s
	I0929 13:18:49.259541 1174954 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 13:18:49.259640 1174954 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 13:18:49.259999 1174954 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 13:18:49.260110 1174954 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 13:18:53.286542 1174954 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.026540969s
	I0929 13:18:54.472772 1174954 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.213197139s
	I0929 13:18:58.487580 1174954 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 9.227933141s
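
kubeadm's kubelet-check and control-plane-check above poll health endpoints until they answer: the kubelet at http://127.0.0.1:10248/healthz, then the apiserver, controller-manager and scheduler at the URLs printed in the log. A rough sketch of that polling pattern; skipping TLS verification is a shortcut to keep the example self-contained, not what kubeadm itself does:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it returns HTTP 200 or the timeout expires,
// roughly the shape of the kubelet-check / control-plane-check waits above.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s not healthy after %s", url, timeout)
}

func main() {
	urls := []string{
		"http://127.0.0.1:10248/healthz",  // kubelet
		"https://192.168.49.2:8443/livez", // kube-apiserver
		"https://127.0.0.1:10257/healthz", // kube-controller-manager
		"https://127.0.0.1:10259/livez",   // kube-scheduler
	}
	for _, u := range urls {
		if err := waitHealthy(u, 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
}
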
	I0929 13:18:58.507229 1174954 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 13:18:58.522665 1174954 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 13:18:58.537089 1174954 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 13:18:58.537318 1174954 kubeadm.go:310] [mark-control-plane] Marking the node ha-399583 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 13:18:58.552394 1174954 kubeadm.go:310] [bootstrap-token] Using token: b3fy01.4kp1xgsz2v3o318m
	I0929 13:18:58.555478 1174954 out.go:252]   - Configuring RBAC rules ...
	I0929 13:18:58.555616 1174954 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 13:18:58.560478 1174954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 13:18:58.570959 1174954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 13:18:58.575207 1174954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 13:18:58.579447 1174954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 13:18:58.583815 1174954 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 13:18:58.894018 1174954 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 13:18:59.319477 1174954 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 13:18:59.895066 1174954 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 13:18:59.896410 1174954 kubeadm.go:310] 
	I0929 13:18:59.896498 1174954 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 13:18:59.896537 1174954 kubeadm.go:310] 
	I0929 13:18:59.896623 1174954 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 13:18:59.896632 1174954 kubeadm.go:310] 
	I0929 13:18:59.896659 1174954 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 13:18:59.896725 1174954 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 13:18:59.896784 1174954 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 13:18:59.896793 1174954 kubeadm.go:310] 
	I0929 13:18:59.896856 1174954 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 13:18:59.896865 1174954 kubeadm.go:310] 
	I0929 13:18:59.896915 1174954 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 13:18:59.896923 1174954 kubeadm.go:310] 
	I0929 13:18:59.896979 1174954 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 13:18:59.897061 1174954 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 13:18:59.897138 1174954 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 13:18:59.897146 1174954 kubeadm.go:310] 
	I0929 13:18:59.897234 1174954 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 13:18:59.897318 1174954 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 13:18:59.897326 1174954 kubeadm.go:310] 
	I0929 13:18:59.897414 1174954 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b3fy01.4kp1xgsz2v3o318m \
	I0929 13:18:59.897526 1174954 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b \
	I0929 13:18:59.897552 1174954 kubeadm.go:310] 	--control-plane 
	I0929 13:18:59.897560 1174954 kubeadm.go:310] 
	I0929 13:18:59.897649 1174954 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 13:18:59.897657 1174954 kubeadm.go:310] 
	I0929 13:18:59.897743 1174954 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b3fy01.4kp1xgsz2v3o318m \
	I0929 13:18:59.897853 1174954 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b 
	I0929 13:18:59.902447 1174954 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0929 13:18:59.902700 1174954 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0929 13:18:59.902817 1174954 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
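
The --discovery-token-ca-cert-hash in the join commands above is a SHA-256 pin over the cluster CA's Subject Public Key Info (the RFC 7469 format kubeadm uses). A minimal sketch of recomputing it, assuming it runs where the CA certificate at /var/lib/minikube/certs/ca.crt is readable (or against a local copy):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// The discovery hash is SHA-256 over the DER-encoded SubjectPublicKeyInfo.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
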
	I0929 13:18:59.902900 1174954 cni.go:84] Creating CNI manager for ""
	I0929 13:18:59.902949 1174954 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0929 13:18:59.907871 1174954 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 13:18:59.910633 1174954 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 13:18:59.914720 1174954 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 13:18:59.914744 1174954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 13:18:59.936309 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 13:19:00.547165 1174954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 13:19:00.547246 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:19:00.547317 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-399583 minikube.k8s.io/updated_at=2025_09_29T13_19_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=ha-399583 minikube.k8s.io/primary=true
	I0929 13:19:00.800539 1174954 ops.go:34] apiserver oom_adj: -16
	I0929 13:19:00.800650 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:19:00.918525 1174954 kubeadm.go:1105] duration metric: took 371.351132ms to wait for elevateKubeSystemPrivileges
	I0929 13:19:00.918560 1174954 kubeadm.go:394] duration metric: took 20.051606228s to StartCluster
	I0929 13:19:00.918586 1174954 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:00.918674 1174954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:19:00.919336 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:00.919578 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 13:19:00.919613 1174954 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:19:00.919681 1174954 addons.go:69] Setting storage-provisioner=true in profile "ha-399583"
	I0929 13:19:00.919695 1174954 addons.go:238] Setting addon storage-provisioner=true in "ha-399583"
	I0929 13:19:00.919594 1174954 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:00.919741 1174954 start.go:241] waiting for startup goroutines ...
	I0929 13:19:00.919718 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:19:00.919885 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:00.919922 1174954 addons.go:69] Setting default-storageclass=true in profile "ha-399583"
	I0929 13:19:00.919944 1174954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-399583"
	I0929 13:19:00.920349 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:19:00.920350 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:19:00.968615 1174954 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:19:00.971101 1174954 kapi.go:59] client config for ha-399583: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 13:19:00.971678 1174954 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0929 13:19:00.971701 1174954 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0929 13:19:00.971706 1174954 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0929 13:19:00.971712 1174954 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0929 13:19:00.971716 1174954 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0929 13:19:00.971984 1174954 addons.go:238] Setting addon default-storageclass=true in "ha-399583"
	I0929 13:19:00.972022 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:19:00.972362 1174954 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:19:00.972381 1174954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:19:00.972437 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:19:00.972811 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:19:00.973251 1174954 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0929 13:19:00.998874 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:19:01.017083 1174954 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:19:01.017107 1174954 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:19:01.017167 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:19:01.044762 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:19:01.151607 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 13:19:01.192885 1174954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:19:01.201000 1174954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:19:01.563960 1174954 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0929 13:19:01.885736 1174954 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 13:19:01.888798 1174954 addons.go:514] duration metric: took 969.143808ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0929 13:19:01.888892 1174954 start.go:246] waiting for cluster config update ...
	I0929 13:19:01.888953 1174954 start.go:255] writing updated cluster config ...
	I0929 13:19:01.891339 1174954 out.go:203] 
	I0929 13:19:01.894679 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:01.894835 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:01.898491 1174954 out.go:179] * Starting "ha-399583-m02" control-plane node in "ha-399583" cluster
	I0929 13:19:01.901483 1174954 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 13:19:01.904643 1174954 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:19:01.907503 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:19:01.907660 1174954 cache.go:58] Caching tarball of preloaded images
	I0929 13:19:01.907606 1174954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:19:01.908015 1174954 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 13:19:01.908053 1174954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 13:19:01.908215 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:01.930613 1174954 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:19:01.930643 1174954 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:19:01.930657 1174954 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:19:01.930682 1174954 start.go:360] acquireMachinesLock for ha-399583-m02: {Name:mkc66e87512662de4b81d9ad77cee2a1bd85fc84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:19:01.930803 1174954 start.go:364] duration metric: took 96.69µs to acquireMachinesLock for "ha-399583-m02"
	I0929 13:19:01.930836 1174954 start.go:93] Provisioning new machine with config: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:01.930915 1174954 start.go:125] createHost starting for "m02" (driver="docker")
	I0929 13:19:01.936288 1174954 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:19:01.936402 1174954 start.go:159] libmachine.API.Create for "ha-399583" (driver="docker")
	I0929 13:19:01.936431 1174954 client.go:168] LocalClient.Create starting
	I0929 13:19:01.936496 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem
	I0929 13:19:01.936561 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:19:01.936597 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:19:01.936667 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem
	I0929 13:19:01.936692 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:19:01.936707 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:19:01.936965 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:19:01.964760 1174954 network_create.go:77] Found existing network {name:ha-399583 subnet:0x4001bf5140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0929 13:19:01.964801 1174954 kic.go:121] calculated static IP "192.168.49.3" for the "ha-399583-m02" container
	I0929 13:19:01.964878 1174954 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:19:01.990942 1174954 cli_runner.go:164] Run: docker volume create ha-399583-m02 --label name.minikube.sigs.k8s.io=ha-399583-m02 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:19:02.014409 1174954 oci.go:103] Successfully created a docker volume ha-399583-m02
	I0929 13:19:02.014494 1174954 cli_runner.go:164] Run: docker run --rm --name ha-399583-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583-m02 --entrypoint /usr/bin/test -v ha-399583-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:19:02.679202 1174954 oci.go:107] Successfully prepared a docker volume ha-399583-m02
	I0929 13:19:02.679231 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:19:02.679251 1174954 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:19:02.679325 1174954 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:19:07.152375 1174954 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473014878s)
	I0929 13:19:07.152405 1174954 kic.go:203] duration metric: took 4.473150412s to extract preloaded images to volume ...
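The two docker commands above are the kic driver's preload pattern: a named volume is created for the new node, then a throwaway kicbase container untars the preloaded-images archive into that volume. A condensed, hand-runnable sketch of the same pattern (the host tarball path below is a placeholder, not the CI path):

    # create the node's volume, then extract the preload into it
    docker volume create ha-399583-m02 --label name.minikube.sigs.k8s.io=ha-399583-m02
    docker run --rm --entrypoint /usr/bin/tar \
      -v /path/to/preloaded-images.tar.lz4:/preloaded.tar:ro \
      -v ha-399583-m02:/extractDir \
      gcr.io/k8s-minikube/kicbase:v0.0.48 \
      -I lz4 -xf /preloaded.tar -C /extractDir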
	W0929 13:19:07.152593 1174954 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 13:19:07.152711 1174954 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:19:07.237258 1174954 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-399583-m02 --name ha-399583-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-399583-m02 --network ha-399583 --ip 192.168.49.3 --volume ha-399583-m02:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:19:07.588662 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Running}}
	I0929 13:19:07.610641 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Status}}
	I0929 13:19:07.639217 1174954 cli_runner.go:164] Run: docker exec ha-399583-m02 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:19:07.689265 1174954 oci.go:144] the created container "ha-399583-m02" has a running status.
	I0929 13:19:07.689290 1174954 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa...
	I0929 13:19:08.967590 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0929 13:19:08.967645 1174954 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:19:08.993092 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Status}}
	I0929 13:19:09.018121 1174954 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:19:09.018143 1174954 kic_runner.go:114] Args: [docker exec --privileged ha-399583-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:19:09.091238 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Status}}
	I0929 13:19:09.114872 1174954 machine.go:93] provisionDockerMachine start ...
	I0929 13:19:09.114979 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:09.138454 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:09.138781 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:09.138796 1174954 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:19:09.300185 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583-m02
	
	I0929 13:19:09.300221 1174954 ubuntu.go:182] provisioning hostname "ha-399583-m02"
	I0929 13:19:09.300323 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:09.328396 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:09.329896 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:09.329930 1174954 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-399583-m02 && echo "ha-399583-m02" | sudo tee /etc/hostname
	I0929 13:19:09.523332 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583-m02
	
	I0929 13:19:09.523435 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:09.574718 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:09.575028 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:09.575051 1174954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-399583-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-399583-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-399583-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:19:09.769956 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:19:09.769989 1174954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 13:19:09.770011 1174954 ubuntu.go:190] setting up certificates
	I0929 13:19:09.770020 1174954 provision.go:84] configureAuth start
	I0929 13:19:09.770081 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m02
	I0929 13:19:09.806137 1174954 provision.go:143] copyHostCerts
	I0929 13:19:09.806189 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:19:09.806223 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 13:19:09.806235 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:19:09.806313 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 13:19:09.806397 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:19:09.806419 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 13:19:09.806425 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:19:09.806453 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 13:19:09.806504 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:19:09.806525 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 13:19:09.806536 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:19:09.806567 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 13:19:09.806620 1174954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.ha-399583-m02 san=[127.0.0.1 192.168.49.3 ha-399583-m02 localhost minikube]
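configureAuth generates the machine's Docker TLS server certificate in Go, signing it with the minikube CA and the SAN list shown above. Purely as an illustration of what that step produces (not the commands the tool actually runs), an equivalent openssl flow would look roughly like this, with file names and validity chosen arbitrarily:

    # create a key + CSR, then sign it with the cluster CA, adding the SANs from the log
    openssl req -new -newkey rsa:2048 -nodes \
      -keyout server-key.pem -out server.csr \
      -subj "/O=jenkins.ha-399583-m02/CN=ha-399583-m02"
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
      -out server.pem -days 365 \
      -extfile <(printf "subjectAltName=IP:127.0.0.1,IP:192.168.49.3,DNS:ha-399583-m02,DNS:localhost,DNS:minikube")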
	I0929 13:19:10.866535 1174954 provision.go:177] copyRemoteCerts
	I0929 13:19:10.866611 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:19:10.866660 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:10.889682 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:10.999316 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0929 13:19:10.999393 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 13:19:11.052232 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0929 13:19:11.052300 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:19:11.089337 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0929 13:19:11.089407 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:19:11.119732 1174954 provision.go:87] duration metric: took 1.349696847s to configureAuth
	I0929 13:19:11.119764 1174954 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:19:11.119970 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:11.120035 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:11.155123 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:11.155429 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:11.155445 1174954 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 13:19:11.339035 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 13:19:11.339054 1174954 ubuntu.go:71] root file system type: overlay
	I0929 13:19:11.339178 1174954 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 13:19:11.339247 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:11.371089 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:11.371407 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:11.371490 1174954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 13:19:11.568641 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 13:19:11.568810 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:11.600745 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:11.601041 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:11.601060 1174954 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 13:19:12.995080 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 13:19:11.563331451 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 13:19:12.995110 1174954 machine.go:96] duration metric: took 3.880212273s to provisionDockerMachine
	I0929 13:19:12.995121 1174954 client.go:171] duration metric: took 11.058683687s to LocalClient.Create
	I0929 13:19:12.995134 1174954 start.go:167] duration metric: took 11.058732804s to libmachine.API.Create "ha-399583"
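The docker.service unit written and diffed above uses the standard systemd override mechanism described in its own comments: an empty ExecStart= first clears the inherited command, and the following ExecStart= supplies the replacement. The log rewrites the whole unit file in /lib/systemd/system, but the same ExecStart override can be expressed as a drop-in; the snippet below is a minimal illustrative sketch (drop-in path and trimmed flags are placeholders, not what minikube writes):

    # minimal ExecStart override via a drop-in: clear, then replace
    sudo mkdir -p /etc/systemd/system/docker.service.d
    printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock\n' \
      | sudo tee /etc/systemd/system/docker.service.d/override.conf
    sudo systemctl daemon-reload && sudo systemctl restart docker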
	I0929 13:19:12.995141 1174954 start.go:293] postStartSetup for "ha-399583-m02" (driver="docker")
	I0929 13:19:12.995150 1174954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:19:12.995221 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:19:12.995266 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:13.030954 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:13.144283 1174954 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:19:13.148264 1174954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:19:13.148302 1174954 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:19:13.148312 1174954 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:19:13.148319 1174954 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:19:13.148329 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 13:19:13.148388 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 13:19:13.148475 1174954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 13:19:13.148486 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /etc/ssl/certs/11276402.pem
	I0929 13:19:13.148678 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:19:13.161314 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:19:13.194420 1174954 start.go:296] duration metric: took 199.264403ms for postStartSetup
	I0929 13:19:13.194833 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m02
	I0929 13:19:13.222369 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:13.222651 1174954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:19:13.222703 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:13.244676 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:13.355271 1174954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:19:13.363160 1174954 start.go:128] duration metric: took 11.43222979s to createHost
	I0929 13:19:13.363190 1174954 start.go:83] releasing machines lock for "ha-399583-m02", held for 11.432372388s
	I0929 13:19:13.363267 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m02
	I0929 13:19:13.398288 1174954 out.go:179] * Found network options:
	I0929 13:19:13.401230 1174954 out.go:179]   - NO_PROXY=192.168.49.2
	W0929 13:19:13.404184 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	W0929 13:19:13.404240 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	I0929 13:19:13.404317 1174954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:19:13.404364 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:13.404854 1174954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:19:13.404909 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:13.443677 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:13.453060 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:13.715409 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:19:13.762331 1174954 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:19:13.762420 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:19:13.808673 1174954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:19:13.808700 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:19:13.808733 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:19:13.808819 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:19:13.850267 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:19:13.869819 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:19:13.886028 1174954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 13:19:13.886128 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 13:19:13.904204 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:19:13.918915 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:19:13.938078 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:19:13.951190 1174954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:19:13.962822 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:19:13.979866 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:19:14.002687 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:19:14.016197 1174954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:19:14.038356 1174954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:19:14.049913 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:14.222731 1174954 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:19:14.395987 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:19:14.396052 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:19:14.396114 1174954 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 13:19:14.421883 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:19:14.441785 1174954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:19:14.491962 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:19:14.513917 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:19:14.536659 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:19:14.568085 1174954 ssh_runner.go:195] Run: which cri-dockerd
	I0929 13:19:14.572700 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 13:19:14.590508 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 13:19:14.625655 1174954 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 13:19:14.786001 1174954 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 13:19:14.949218 1174954 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 13:19:14.949299 1174954 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 13:19:14.970637 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 13:19:14.982306 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:15.114124 1174954 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 13:19:15.862209 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:19:15.883370 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 13:19:15.901976 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:19:15.919605 1174954 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 13:19:16.090584 1174954 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 13:19:16.235805 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:16.368385 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 13:19:16.386764 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 13:19:16.399426 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:16.516458 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 13:19:16.669097 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:19:16.685845 1174954 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 13:19:16.685928 1174954 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 13:19:16.694278 1174954 start.go:563] Will wait 60s for crictl version
	I0929 13:19:16.694392 1174954 ssh_runner.go:195] Run: which crictl
	I0929 13:19:16.698498 1174954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:19:16.774768 1174954 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 13:19:16.774852 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:19:16.809187 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:19:16.863090 1174954 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 13:19:16.866031 1174954 out.go:179]   - env NO_PROXY=192.168.49.2
	I0929 13:19:16.868939 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:19:16.892673 1174954 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 13:19:16.896392 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:19:16.912994 1174954 mustload.go:65] Loading cluster: ha-399583
	I0929 13:19:16.913225 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:16.913484 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:19:16.939184 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:19:16.939531 1174954 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583 for IP: 192.168.49.3
	I0929 13:19:16.939547 1174954 certs.go:194] generating shared ca certs ...
	I0929 13:19:16.939579 1174954 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:16.939745 1174954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 13:19:16.939789 1174954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 13:19:16.939818 1174954 certs.go:256] generating profile certs ...
	I0929 13:19:16.939936 1174954 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key
	I0929 13:19:16.939986 1174954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.6c426547
	I0929 13:19:16.940007 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.6c426547 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0929 13:19:17.951806 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.6c426547 ...
	I0929 13:19:17.951838 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.6c426547: {Name:mk364b0c6a477f0cee6381c4956d3d67e3f29bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:17.952068 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.6c426547 ...
	I0929 13:19:17.952087 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.6c426547: {Name:mk9ec6fab1a22143f857f5e99f9b70589de081fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:17.952187 1174954 certs.go:381] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.6c426547 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt
	I0929 13:19:17.952320 1174954 certs.go:385] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.6c426547 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key
	I0929 13:19:17.952454 1174954 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key
	I0929 13:19:17.952472 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0929 13:19:17.952492 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0929 13:19:17.952520 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0929 13:19:17.952532 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0929 13:19:17.952544 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0929 13:19:17.952555 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0929 13:19:17.952568 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0929 13:19:17.952585 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0929 13:19:17.952633 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 13:19:17.952665 1174954 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 13:19:17.952678 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:19:17.952701 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:19:17.952728 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:19:17.952756 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 13:19:17.952802 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:19:17.952835 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:19:17.952852 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem -> /usr/share/ca-certificates/1127640.pem
	I0929 13:19:17.952864 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /usr/share/ca-certificates/11276402.pem
	I0929 13:19:17.952921 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:19:17.978801 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:19:18.080915 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0929 13:19:18.089902 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0929 13:19:18.105320 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0929 13:19:18.109842 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0929 13:19:18.130354 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0929 13:19:18.135000 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0929 13:19:18.158595 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0929 13:19:18.165771 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0929 13:19:18.191203 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0929 13:19:18.199215 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0929 13:19:18.213343 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0929 13:19:18.217279 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0929 13:19:18.230828 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:19:18.259851 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:19:18.286739 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:19:18.312892 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:19:18.345383 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0929 13:19:18.372738 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:19:18.400929 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:19:18.427527 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:19:18.454148 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:19:18.481656 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 13:19:18.507564 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 13:19:18.534101 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0929 13:19:18.552624 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0929 13:19:18.572657 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0929 13:19:18.591652 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0929 13:19:18.611738 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0929 13:19:18.630245 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0929 13:19:18.649047 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0929 13:19:18.667202 1174954 ssh_runner.go:195] Run: openssl version
	I0929 13:19:18.672965 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 13:19:18.682742 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 13:19:18.693731 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 13:19:18.693795 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 13:19:18.703321 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 13:19:18.713982 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 13:19:18.723979 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 13:19:18.728151 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 13:19:18.728220 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 13:19:18.735565 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:19:18.746664 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:19:18.757113 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:19:18.761032 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:19:18.761102 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:19:18.770731 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
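The three test/ln blocks above install each CA into the OpenSSL trust directory under its subject-hash name (e.g. b5213941.0), which is how OpenSSL looks certificates up in /etc/ssl/certs. The generic pattern, shown here for the minikube CA:

    # compute the subject hash and create the <hash>.0 symlink OpenSSL expects
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"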
	I0929 13:19:18.781204 1174954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:19:18.787911 1174954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 13:19:18.787974 1174954 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0929 13:19:18.788060 1174954 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-399583-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:19:18.788086 1174954 kube-vip.go:115] generating kube-vip config ...
	I0929 13:19:18.788134 1174954 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0929 13:19:18.802548 1174954 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:19:18.802611 1174954 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0929 13:19:18.802674 1174954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:19:18.814009 1174954 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:19:18.814081 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0929 13:19:18.827223 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 13:19:18.855545 1174954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:19:18.882423 1174954 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0929 13:19:18.901554 1174954 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0929 13:19:18.905614 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:19:18.917838 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:19.018807 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:19:19.037592 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:19:19.037950 1174954 start.go:317] joinCluster: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:19:19.038083 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0929 13:19:19.038202 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:19:19.060470 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:19:19.239338 1174954 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:19.239391 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token us5i83.fcf01pewvpqcb5lq --discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-399583-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0929 13:19:48.136076 1174954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token us5i83.fcf01pewvpqcb5lq --discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-399583-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (28.896663228s)
	I0929 13:19:48.136109 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0929 13:19:48.397873 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-399583-m02 minikube.k8s.io/updated_at=2025_09_29T13_19_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=ha-399583 minikube.k8s.io/primary=false
	I0929 13:19:48.512172 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-399583-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0929 13:19:48.620359 1174954 start.go:319] duration metric: took 29.582405213s to joinCluster
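The sequence above is the stock kubeadm control-plane join flow: the primary prints a join command with a fresh token, the new node runs it with control-plane flags, and the node is then labeled and has its NoSchedule taint removed. A condensed sketch (token and CA hash are per-run values; the placeholders below stand in for the ones printed on the primary):

    # on the existing control plane: print a reusable join command
    kubeadm token create --print-join-command --ttl=0
    # on the new node: join as an additional control plane
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address=192.168.49.3 \
      --apiserver-bind-port=8443 --cri-socket unix:///var/run/cri-dockerd.sock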
	I0929 13:19:48.620425 1174954 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:48.620755 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:48.623321 1174954 out.go:179] * Verifying Kubernetes components...
	I0929 13:19:48.626203 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:48.735921 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:19:48.753049 1174954 kapi.go:59] client config for ha-399583: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0929 13:19:48.753125 1174954 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0929 13:19:48.753358 1174954 node_ready.go:35] waiting up to 6m0s for node "ha-399583-m02" to be "Ready" ...
	W0929 13:19:50.757486 1174954 node_ready.go:57] node "ha-399583-m02" has "Ready":"False" status (will retry)
	W0929 13:19:53.257719 1174954 node_ready.go:57] node "ha-399583-m02" has "Ready":"False" status (will retry)
	I0929 13:19:53.757841 1174954 node_ready.go:49] node "ha-399583-m02" is "Ready"
	I0929 13:19:53.757872 1174954 node_ready.go:38] duration metric: took 5.004492285s for node "ha-399583-m02" to be "Ready" ...
	I0929 13:19:53.757889 1174954 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:19:53.757950 1174954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:19:53.769589 1174954 api_server.go:72] duration metric: took 5.149124119s to wait for apiserver process to appear ...
	I0929 13:19:53.769620 1174954 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:19:53.769640 1174954 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 13:19:53.778508 1174954 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 13:19:53.779866 1174954 api_server.go:141] control plane version: v1.34.0
	I0929 13:19:53.779890 1174954 api_server.go:131] duration metric: took 10.263816ms to wait for apiserver health ...
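The health check above is a plain HTTPS GET against the apiserver's /healthz endpoint using the profile's client certificate. A stand-alone sketch of the same probe (cert paths and address are those reported in the kapi.go client config earlier in the log; the retry loop and timeouts are illustrative assumptions, not minikube's exact logic):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
	"time"
)

func main() {
	cert, err := tls.LoadX509KeyPair(
		"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt",
		"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key")
	if err != nil {
		panic(err)
	}
	caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}},
	}

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(time.Second) // illustrative retry interval
	}
	fmt.Println("timed out waiting for /healthz")
}
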
	I0929 13:19:53.779899 1174954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:19:53.786312 1174954 system_pods.go:59] 17 kube-system pods found
	I0929 13:19:53.786353 1174954 system_pods.go:61] "coredns-66bc5c9577-5dqqj" [8f0fb99f-7e4a-493f-b70f-40f31bcab4d4] Running
	I0929 13:19:53.786361 1174954 system_pods.go:61] "coredns-66bc5c9577-p6v89" [3dba7282-54c9-4cf8-acd8-64548b982b4e] Running
	I0929 13:19:53.786371 1174954 system_pods.go:61] "etcd-ha-399583" [3ea005e3-9669-4b7f-98e5-a3692b0c0343] Running
	I0929 13:19:53.786375 1174954 system_pods.go:61] "etcd-ha-399583-m02" [9ba091fd-eec6-44a2-b787-f1f9d65f9362] Pending
	I0929 13:19:53.786380 1174954 system_pods.go:61] "kindnet-552n5" [c90d340a-8259-46ca-8ade-1a0b40030268] Running
	I0929 13:19:53.786387 1174954 system_pods.go:61] "kindnet-dst2d" [2786bef1-c109-449d-ad17-805dd8f59f16] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-dst2d": pod kindnet-dst2d is already assigned to node "ha-399583-m02")
	I0929 13:19:53.786393 1174954 system_pods.go:61] "kube-apiserver-ha-399583" [bc7d6fe4-455b-4daa-8f7e-a7f64256e04f] Running
	I0929 13:19:53.786402 1174954 system_pods.go:61] "kube-apiserver-ha-399583-m02" [1efc9e70-f594-43f6-983a-fbc8872669de] Pending
	I0929 13:19:53.786408 1174954 system_pods.go:61] "kube-controller-manager-ha-399583" [c034b62f-f349-480f-a0e8-9dadb8cf3271] Running
	I0929 13:19:53.786418 1174954 system_pods.go:61] "kube-controller-manager-ha-399583-m02" [0a817e7c-accd-49b5-b37c-b247802a40de] Pending
	I0929 13:19:53.786426 1174954 system_pods.go:61] "kube-proxy-2cb75" [9bedc440-6814-4d94-8c20-43960dcf6a3e] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-2cb75": pod kube-proxy-2cb75 is already assigned to node "ha-399583-m02")
	I0929 13:19:53.786437 1174954 system_pods.go:61] "kube-proxy-s2d46" [56cb5a11-c68a-45b2-af1f-8211c2f3baf5] Running
	I0929 13:19:53.786452 1174954 system_pods.go:61] "kube-scheduler-ha-399583" [069ff250-ab03-4718-8694-05ba94ef46aa] Running
	I0929 13:19:53.786459 1174954 system_pods.go:61] "kube-scheduler-ha-399583-m02" [fc1b4c16-9849-4fcf-ab34-227630e4991b] Pending
	I0929 13:19:53.786464 1174954 system_pods.go:61] "kube-vip-ha-399583" [36f87183-b427-4b90-96b5-37f5b816c1b1] Running
	I0929 13:19:53.786468 1174954 system_pods.go:61] "kube-vip-ha-399583-m02" [4ba43fb8-0080-4909-80ab-9577ed9a03cb] Pending
	I0929 13:19:53.786473 1174954 system_pods.go:61] "storage-provisioner" [5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab] Running
	I0929 13:19:53.786485 1174954 system_pods.go:74] duration metric: took 6.569114ms to wait for pod list to return data ...
	I0929 13:19:53.786498 1174954 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:19:53.791275 1174954 default_sa.go:45] found service account: "default"
	I0929 13:19:53.791347 1174954 default_sa.go:55] duration metric: took 4.840948ms for default service account to be created ...
	I0929 13:19:53.791374 1174954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:19:53.795778 1174954 system_pods.go:86] 17 kube-system pods found
	I0929 13:19:53.795813 1174954 system_pods.go:89] "coredns-66bc5c9577-5dqqj" [8f0fb99f-7e4a-493f-b70f-40f31bcab4d4] Running
	I0929 13:19:53.795820 1174954 system_pods.go:89] "coredns-66bc5c9577-p6v89" [3dba7282-54c9-4cf8-acd8-64548b982b4e] Running
	I0929 13:19:53.795825 1174954 system_pods.go:89] "etcd-ha-399583" [3ea005e3-9669-4b7f-98e5-a3692b0c0343] Running
	I0929 13:19:53.795829 1174954 system_pods.go:89] "etcd-ha-399583-m02" [9ba091fd-eec6-44a2-b787-f1f9d65f9362] Pending
	I0929 13:19:53.795833 1174954 system_pods.go:89] "kindnet-552n5" [c90d340a-8259-46ca-8ade-1a0b40030268] Running
	I0929 13:19:53.795841 1174954 system_pods.go:89] "kindnet-dst2d" [2786bef1-c109-449d-ad17-805dd8f59f16] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-dst2d": pod kindnet-dst2d is already assigned to node "ha-399583-m02")
	I0929 13:19:53.795847 1174954 system_pods.go:89] "kube-apiserver-ha-399583" [bc7d6fe4-455b-4daa-8f7e-a7f64256e04f] Running
	I0929 13:19:53.795853 1174954 system_pods.go:89] "kube-apiserver-ha-399583-m02" [1efc9e70-f594-43f6-983a-fbc8872669de] Pending
	I0929 13:19:53.795857 1174954 system_pods.go:89] "kube-controller-manager-ha-399583" [c034b62f-f349-480f-a0e8-9dadb8cf3271] Running
	I0929 13:19:53.795862 1174954 system_pods.go:89] "kube-controller-manager-ha-399583-m02" [0a817e7c-accd-49b5-b37c-b247802a40de] Pending
	I0929 13:19:53.795868 1174954 system_pods.go:89] "kube-proxy-2cb75" [9bedc440-6814-4d94-8c20-43960dcf6a3e] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-2cb75": pod kube-proxy-2cb75 is already assigned to node "ha-399583-m02")
	I0929 13:19:53.795873 1174954 system_pods.go:89] "kube-proxy-s2d46" [56cb5a11-c68a-45b2-af1f-8211c2f3baf5] Running
	I0929 13:19:53.795878 1174954 system_pods.go:89] "kube-scheduler-ha-399583" [069ff250-ab03-4718-8694-05ba94ef46aa] Running
	I0929 13:19:53.795885 1174954 system_pods.go:89] "kube-scheduler-ha-399583-m02" [fc1b4c16-9849-4fcf-ab34-227630e4991b] Pending
	I0929 13:19:53.795890 1174954 system_pods.go:89] "kube-vip-ha-399583" [36f87183-b427-4b90-96b5-37f5b816c1b1] Running
	I0929 13:19:53.795910 1174954 system_pods.go:89] "kube-vip-ha-399583-m02" [4ba43fb8-0080-4909-80ab-9577ed9a03cb] Pending
	I0929 13:19:53.795914 1174954 system_pods.go:89] "storage-provisioner" [5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab] Running
	I0929 13:19:53.795921 1174954 system_pods.go:126] duration metric: took 4.529075ms to wait for k8s-apps to be running ...
	I0929 13:19:53.795933 1174954 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:19:53.795993 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:19:53.808623 1174954 system_svc.go:56] duration metric: took 12.681804ms WaitForService to wait for kubelet
	I0929 13:19:53.808652 1174954 kubeadm.go:578] duration metric: took 5.188191498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
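The component verification above (node Ready, kube-system pods, default service account, kubelet) is a series of ordinary Kubernetes API reads. A hedged client-go sketch of the first two checks; the kubeconfig path is the on-node one used in the logged kubectl calls, and the code is an illustration rather than minikube's verifier:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Check the joined node reports Ready (the node_ready.go wait above).
	node, err := cs.CoreV1().Nodes().Get(ctx, "ha-399583-m02", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Printf("node Ready=%s\n", c.Status)
		}
	}

	// List kube-system pods and report any that are not Running/Succeeded
	// (the system_pods.go wait above).
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		if p.Status.Phase != corev1.PodRunning && p.Status.Phase != corev1.PodSucceeded {
			fmt.Printf("%s is %s\n", p.Name, p.Status.Phase)
		}
	}
}
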
	I0929 13:19:53.808672 1174954 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:19:53.812712 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:19:53.812795 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:19:53.812822 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:19:53.812840 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:19:53.812872 1174954 node_conditions.go:105] duration metric: took 4.19349ms to run NodePressure ...
	I0929 13:19:53.812903 1174954 start.go:241] waiting for startup goroutines ...
	I0929 13:19:53.812960 1174954 start.go:255] writing updated cluster config ...
	I0929 13:19:53.816405 1174954 out.go:203] 
	I0929 13:19:53.819624 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:53.819804 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:53.823289 1174954 out.go:179] * Starting "ha-399583-m03" control-plane node in "ha-399583" cluster
	I0929 13:19:53.826271 1174954 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 13:19:53.830153 1174954 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:19:53.833398 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:19:53.833513 1174954 cache.go:58] Caching tarball of preloaded images
	I0929 13:19:53.833477 1174954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:19:53.833852 1174954 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 13:19:53.833895 1174954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 13:19:53.834066 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:53.864753 1174954 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:19:53.864773 1174954 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:19:53.864786 1174954 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:19:53.864810 1174954 start.go:360] acquireMachinesLock for ha-399583-m03: {Name:mk2b898fb28e1dbc9512aed087b03adf147176a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:19:53.864913 1174954 start.go:364] duration metric: took 89.24µs to acquireMachinesLock for "ha-399583-m03"
	I0929 13:19:53.864938 1174954 start.go:93] Provisioning new machine with config: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:fals
e kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAge
ntPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:53.865038 1174954 start.go:125] createHost starting for "m03" (driver="docker")
	I0929 13:19:53.868470 1174954 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:19:53.868592 1174954 start.go:159] libmachine.API.Create for "ha-399583" (driver="docker")
	I0929 13:19:53.868621 1174954 client.go:168] LocalClient.Create starting
	I0929 13:19:53.868686 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem
	I0929 13:19:53.868719 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:19:53.868732 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:19:53.868785 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem
	I0929 13:19:53.868806 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:19:53.868817 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:19:53.869050 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:19:53.890019 1174954 network_create.go:77] Found existing network {name:ha-399583 subnet:0x400015a330 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0929 13:19:53.890056 1174954 kic.go:121] calculated static IP "192.168.49.4" for the "ha-399583-m03" container
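kic.go derives the new node's static address from the existing cluster network (gateway 192.168.49.1, existing nodes .2 and .3, so m03 gets .4). A small sketch of that arithmetic, assuming an IPv4 /24 like the one used here; the helper name is made up:

package main

import (
	"fmt"
	"net"
)

// nthHostIP returns the nth host address in an IPv4 CIDR
// (n=1 -> the .1 gateway, n=4 -> the .4 calculated for ha-399583-m03).
// Simplistic: only valid while the offset stays inside the last octet.
func nthHostIP(cidr string, n int) (net.IP, error) {
	_, ipnet, err := net.ParseCIDR(cidr)
	if err != nil {
		return nil, err
	}
	ip := ipnet.IP.To4()
	if ip == nil {
		return nil, fmt.Errorf("not an IPv4 network: %s", cidr)
	}
	out := make(net.IP, len(ip))
	copy(out, ip)
	out[3] += byte(n)
	return out, nil
}

func main() {
	ip, err := nthHostIP("192.168.49.0/24", 4)
	if err != nil {
		panic(err)
	}
	fmt.Println(ip) // 192.168.49.4
}
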
	I0929 13:19:53.890345 1174954 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:19:53.910039 1174954 cli_runner.go:164] Run: docker volume create ha-399583-m03 --label name.minikube.sigs.k8s.io=ha-399583-m03 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:19:53.933513 1174954 oci.go:103] Successfully created a docker volume ha-399583-m03
	I0929 13:19:53.933599 1174954 cli_runner.go:164] Run: docker run --rm --name ha-399583-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583-m03 --entrypoint /usr/bin/test -v ha-399583-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:19:54.624336 1174954 oci.go:107] Successfully prepared a docker volume ha-399583-m03
	I0929 13:19:54.624378 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:19:54.624399 1174954 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:19:54.624489 1174954 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:19:58.913405 1174954 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.288878633s)
	I0929 13:19:58.913439 1174954 kic.go:203] duration metric: took 4.289036076s to extract preloaded images to volume ...
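The two docker invocations above first create a named volume for the node, then extract the preloaded image tarball into it with a throwaway kicbase container running tar. A sketch that drives the same commands via os/exec (volume name, image digest, and tarball path are copied from the log; error handling is minimal):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	volume := "ha-399583-m03"
	image := "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1"
	tarball := "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4"

	// docker volume create <name> --label ...
	if out, err := exec.Command("docker", "volume", "create", volume,
		"--label", "name.minikube.sigs.k8s.io="+volume,
		"--label", "created_by.minikube.sigs.k8s.io=true").CombinedOutput(); err != nil {
		panic(fmt.Errorf("volume create: %v: %s", err, out))
	}

	// docker run --rm --entrypoint /usr/bin/tar ... -I lz4 -xf /preloaded.tar -C /extractDir
	if out, err := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir").CombinedOutput(); err != nil {
		panic(fmt.Errorf("extract: %v: %s", err, out))
	}
	fmt.Println("preloaded images extracted into volume", volume)
}
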
	W0929 13:19:58.913581 1174954 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 13:19:58.913697 1174954 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:19:59.026434 1174954 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-399583-m03 --name ha-399583-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-399583-m03 --network ha-399583 --ip 192.168.49.4 --volume ha-399583-m03:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:19:59.421197 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Running}}
	I0929 13:19:59.449045 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Status}}
	I0929 13:19:59.475768 1174954 cli_runner.go:164] Run: docker exec ha-399583-m03 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:19:59.539552 1174954 oci.go:144] the created container "ha-399583-m03" has a running status.
	I0929 13:19:59.539579 1174954 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa...
	I0929 13:20:00.165439 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0929 13:20:00.165493 1174954 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:20:00.230597 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Status}}
	I0929 13:20:00.279749 1174954 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:20:00.279772 1174954 kic_runner.go:114] Args: [docker exec --privileged ha-399583-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
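kic.go generates a per-machine SSH keypair, then copies the public half into /home/docker/.ssh/authorized_keys inside the container and chowns it, as shown above. A sketch of the key-generation half using crypto/rsa and x/crypto/ssh (file names and the 2048-bit key size are assumptions; minikube's exact parameters may differ):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"encoding/pem"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	priv, err := rsa.GenerateKey(rand.Reader, 2048) // key size is an assumption
	if err != nil {
		panic(err)
	}

	// PEM-encode the private key (the machine's id_rsa).
	privPEM := pem.EncodeToMemory(&pem.Block{
		Type:  "RSA PRIVATE KEY",
		Bytes: x509.MarshalPKCS1PrivateKey(priv),
	})
	if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
		panic(err)
	}

	// authorized_keys format for the public key (id_rsa.pub), i.e. the
	// small blob copied into the container above.
	pub, err := ssh.NewPublicKey(&priv.PublicKey)
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile("id_rsa.pub", ssh.MarshalAuthorizedKey(pub), 0o644); err != nil {
		panic(err)
	}
}
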
	I0929 13:20:00.497905 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Status}}
	I0929 13:20:00.541566 1174954 machine.go:93] provisionDockerMachine start ...
	I0929 13:20:00.541686 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:00.580321 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:00.580713 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:00.580737 1174954 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:20:00.832036 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583-m03
	
	I0929 13:20:00.832064 1174954 ubuntu.go:182] provisioning hostname "ha-399583-m03"
	I0929 13:20:00.832134 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:00.861349 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:00.861679 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:00.861696 1174954 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-399583-m03 && echo "ha-399583-m03" | sudo tee /etc/hostname
	I0929 13:20:01.047493 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583-m03
	
	I0929 13:20:01.047588 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:01.079010 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:01.079315 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:01.079337 1174954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-399583-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-399583-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-399583-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:20:01.243015 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
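The provisioning steps above run shell snippets over SSH against the container's forwarded port 33948 using the machine's id_rsa. A stand-alone sketch of the same pattern with golang.org/x/crypto/ssh, running the logged hostname command; host-key verification is skipped here purely for brevity, which is an assumption and not what a hardened client should do:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}

	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // demo only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33948", cfg) // port reported by sshutil.go above
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.CombinedOutput(`sudo hostname ha-399583-m03 && echo "ha-399583-m03" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out)
}
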
	I0929 13:20:01.243044 1174954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 13:20:01.243061 1174954 ubuntu.go:190] setting up certificates
	I0929 13:20:01.243072 1174954 provision.go:84] configureAuth start
	I0929 13:20:01.243139 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m03
	I0929 13:20:01.265242 1174954 provision.go:143] copyHostCerts
	I0929 13:20:01.265290 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:20:01.265326 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 13:20:01.265341 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:20:01.265419 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 13:20:01.265507 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:20:01.265532 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 13:20:01.265539 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:20:01.265580 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 13:20:01.265627 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:20:01.265649 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 13:20:01.265656 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:20:01.265681 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 13:20:01.265732 1174954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.ha-399583-m03 san=[127.0.0.1 192.168.49.4 ha-399583-m03 localhost minikube]
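configureAuth mints a server certificate for the new machine, signed by the local CA and valid for the SANs listed above (127.0.0.1, 192.168.49.4, ha-399583-m03, localhost, minikube). A compact crypto/x509 sketch of that signing step; loading the CA pair is left to the caller, and the serial number and validity window are simplified assumptions rather than minikube's values:

package certs

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate for the given SANs, signed by
// caCert/caKey (already parsed by the caller).
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey,
	dnsNames []string, ips []net.IP) (certPEM, keyPEM []byte, err error) {

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-399583-m03"}},
		DNSNames:     dnsNames, // e.g. ha-399583-m03, localhost, minikube
		IPAddresses:  ips,      // e.g. 127.0.0.1, 192.168.49.4
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}

	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}

	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}
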
	I0929 13:20:02.210993 1174954 provision.go:177] copyRemoteCerts
	I0929 13:20:02.211070 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:20:02.211117 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:02.235070 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:02.342243 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0929 13:20:02.342309 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:20:02.370693 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0929 13:20:02.370758 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 13:20:02.406117 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0929 13:20:02.406193 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:20:02.433664 1174954 provision.go:87] duration metric: took 1.190577158s to configureAuth
	I0929 13:20:02.433695 1174954 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:20:02.433929 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:20:02.433990 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:02.452035 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:02.452357 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:02.452371 1174954 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 13:20:02.597297 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 13:20:02.597322 1174954 ubuntu.go:71] root file system type: overlay
	I0929 13:20:02.597429 1174954 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 13:20:02.597505 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:02.616941 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:02.617971 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:02.618086 1174954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 13:20:02.779351 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 13:20:02.779455 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:02.799223 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:02.799534 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:02.799557 1174954 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 13:20:03.735315 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 13:20:02.775888663 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 13:20:03.735352 1174954 machine.go:96] duration metric: took 3.193757868s to provisionDockerMachine
	I0929 13:20:03.735363 1174954 client.go:171] duration metric: took 9.866735605s to LocalClient.Create
	I0929 13:20:03.735376 1174954 start.go:167] duration metric: took 9.866785559s to libmachine.API.Create "ha-399583"
	I0929 13:20:03.735383 1174954 start.go:293] postStartSetup for "ha-399583-m03" (driver="docker")
	I0929 13:20:03.735394 1174954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:20:03.735469 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:20:03.735514 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:03.756038 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:03.865155 1174954 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:20:03.869100 1174954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:20:03.869131 1174954 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:20:03.869150 1174954 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:20:03.869157 1174954 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:20:03.869167 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 13:20:03.869229 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 13:20:03.869304 1174954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 13:20:03.869311 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /etc/ssl/certs/11276402.pem
	I0929 13:20:03.869412 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:20:03.879291 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:20:03.910042 1174954 start.go:296] duration metric: took 174.64236ms for postStartSetup
	I0929 13:20:03.910412 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m03
	I0929 13:20:03.929549 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:20:03.929853 1174954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:20:03.929910 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:03.946942 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:04.045660 1174954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:20:04.050696 1174954 start.go:128] duration metric: took 10.185641501s to createHost
	I0929 13:20:04.050721 1174954 start.go:83] releasing machines lock for "ha-399583-m03", held for 10.185799967s
	I0929 13:20:04.050794 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m03
	I0929 13:20:04.072117 1174954 out.go:179] * Found network options:
	I0929 13:20:04.075044 1174954 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0929 13:20:04.078038 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	W0929 13:20:04.078065 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	W0929 13:20:04.078091 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	W0929 13:20:04.078104 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	I0929 13:20:04.078176 1174954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:20:04.078218 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:04.078237 1174954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:20:04.078291 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:04.097469 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:04.098230 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:04.193075 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:20:04.335686 1174954 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:20:04.335795 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:20:04.368922 1174954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:20:04.369007 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:20:04.369068 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:20:04.369232 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:20:04.387484 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:20:04.398856 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:20:04.409189 1174954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 13:20:04.409264 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 13:20:04.419854 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:20:04.430371 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:20:04.440576 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:20:04.451380 1174954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:20:04.461631 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:20:04.472007 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:20:04.481706 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:20:04.491487 1174954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:20:04.499820 1174954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:20:04.508580 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:04.598502 1174954 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:20:04.700077 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:20:04.700124 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:20:04.700177 1174954 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 13:20:04.714663 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:20:04.729615 1174954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:20:04.775282 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:20:04.788800 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:20:04.801856 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:20:04.819869 1174954 ssh_runner.go:195] Run: which cri-dockerd
	I0929 13:20:04.825237 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 13:20:04.837176 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 13:20:04.856412 1174954 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 13:20:04.956606 1174954 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 13:20:05.046952 1174954 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 13:20:05.047052 1174954 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 13:20:05.068496 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 13:20:05.081179 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:05.188442 1174954 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 13:20:05.641030 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
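docker.go:575 above pushes a small /etc/docker/daemon.json to force the "cgroupfs" cgroup driver before reloading and restarting dockerd. The exact 130-byte payload is not shown in the log; the sketch below writes a plausible equivalent using Docker's documented exec-opts setting, so both the field set and the local path are assumptions:

package main

import (
	"encoding/json"
	"os"
)

func main() {
	// Minimal daemon.json selecting the cgroupfs driver; minikube's real
	// payload may contain additional keys.
	cfg := map[string]interface{}{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts": map[string]string{
			"max-size": "100m",
		},
	}
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// Written locally here; minikube copies it to /etc/docker/daemon.json over
	// SSH and then runs `systemctl daemon-reload` and `systemctl restart docker`
	// as in the log above.
	if err := os.WriteFile("daemon.json", append(data, '\n'), 0o644); err != nil {
		panic(err)
	}
}
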
	I0929 13:20:05.658091 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 13:20:05.674159 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:20:05.688899 1174954 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 13:20:05.794834 1174954 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 13:20:05.898750 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:06.004134 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 13:20:06.021207 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 13:20:06.033824 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:06.131795 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 13:20:06.204688 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:20:06.217218 1174954 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 13:20:06.217300 1174954 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 13:20:06.230161 1174954 start.go:563] Will wait 60s for crictl version
	I0929 13:20:06.230229 1174954 ssh_runner.go:195] Run: which crictl
	I0929 13:20:06.233825 1174954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:20:06.276251 1174954 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 13:20:06.276321 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:20:06.300978 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:20:06.340349 1174954 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 13:20:06.343223 1174954 out.go:179]   - env NO_PROXY=192.168.49.2
	I0929 13:20:06.346129 1174954 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0929 13:20:06.349043 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:20:06.377719 1174954 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 13:20:06.388776 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
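The one-liner above pins host.minikube.internal to the network gateway by filtering any existing entry out of /etc/hosts, appending the new one, staging the result in /tmp, and copying it back into place. A Go sketch of the same filter-append-replace idea; it runs locally and shells out to `sudo cp` only to mirror the logged command, so treat the paths and privilege handling as illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	const hostsPath = "/etc/hosts"
	const entry = "192.168.49.1\thost.minikube.internal"

	data, err := os.ReadFile(hostsPath)
	if err != nil {
		panic(err)
	}

	// Drop any stale host.minikube.internal line, keep everything else,
	// then append the pinned entry (same effect as the grep -v / echo above).
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\thost.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)

	// Stage in /tmp and copy back, mirroring the "> /tmp/h.$$; sudo cp" step.
	tmp := fmt.Sprintf("/tmp/h.%d", os.Getpid())
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		panic(err)
	}
	if out, err := exec.Command("sudo", "cp", tmp, hostsPath).CombinedOutput(); err != nil {
		panic(fmt.Errorf("cp: %v: %s", err, out))
	}
}
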
	I0929 13:20:06.411154 1174954 mustload.go:65] Loading cluster: ha-399583
	I0929 13:20:06.411394 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:20:06.411640 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:20:06.430528 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:20:06.430841 1174954 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583 for IP: 192.168.49.4
	I0929 13:20:06.430850 1174954 certs.go:194] generating shared ca certs ...
	I0929 13:20:06.430866 1174954 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:20:06.430997 1174954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 13:20:06.431058 1174954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 13:20:06.431074 1174954 certs.go:256] generating profile certs ...
	I0929 13:20:06.431168 1174954 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key
	I0929 13:20:06.431196 1174954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.416eddfa
	I0929 13:20:06.431210 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.416eddfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0929 13:20:06.888217 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.416eddfa ...
	I0929 13:20:06.888265 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.416eddfa: {Name:mk683375c282b9fb5dafe4bb714d1d87fd779b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:20:06.888467 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.416eddfa ...
	I0929 13:20:06.888487 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.416eddfa: {Name:mk032598528855acdbae9e710bee9e27a0f4170b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:20:06.888608 1174954 certs.go:381] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.416eddfa -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt
	I0929 13:20:06.888742 1174954 certs.go:385] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.416eddfa -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key
	I0929 13:20:06.888881 1174954 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key
	I0929 13:20:06.888899 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0929 13:20:06.888916 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0929 13:20:06.888932 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0929 13:20:06.888946 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0929 13:20:06.888962 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0929 13:20:06.888979 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0929 13:20:06.888996 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0929 13:20:06.889007 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0929 13:20:06.889078 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 13:20:06.889110 1174954 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 13:20:06.889126 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:20:06.889150 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:20:06.889179 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:20:06.889206 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 13:20:06.889252 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:20:06.889284 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem -> /usr/share/ca-certificates/1127640.pem
	I0929 13:20:06.889299 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /usr/share/ca-certificates/11276402.pem
	I0929 13:20:06.889311 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:20:06.889372 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:20:06.907926 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:20:07.012906 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0929 13:20:07.017271 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0929 13:20:07.030791 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0929 13:20:07.034486 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0929 13:20:07.047662 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0929 13:20:07.051627 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0929 13:20:07.065782 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0929 13:20:07.069487 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0929 13:20:07.082405 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0929 13:20:07.086808 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0929 13:20:07.099378 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0929 13:20:07.102876 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0929 13:20:07.115722 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:20:07.144712 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:20:07.171578 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:20:07.196661 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:20:07.227601 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0929 13:20:07.261933 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:20:07.288317 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:20:07.313720 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:20:07.351784 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 13:20:07.378975 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 13:20:07.409020 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:20:07.435100 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0929 13:20:07.455821 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0929 13:20:07.476986 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0929 13:20:07.496799 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0929 13:20:07.515486 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0929 13:20:07.534374 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0929 13:20:07.552866 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0929 13:20:07.573277 1174954 ssh_runner.go:195] Run: openssl version
	I0929 13:20:07.578770 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 13:20:07.588964 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 13:20:07.592719 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 13:20:07.592798 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 13:20:07.599718 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 13:20:07.609344 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 13:20:07.618729 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 13:20:07.622461 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 13:20:07.622531 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 13:20:07.630086 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:20:07.640458 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:20:07.650309 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:20:07.655064 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:20:07.655164 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:20:07.662942 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
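	The openssl/ln sequence above is minikube's CA-trust setup inside the node: each PEM under /usr/share/ca-certificates is hashed with openssl and symlinked into /etc/ssl/certs under its subject-hash name. A minimal manual sketch of the same steps, reusing only commands and names already shown in this log (run inside the node, e.g. via minikube ssh -p ha-399583):
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints the subject hash, b5213941 in this run
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0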
	I0929 13:20:07.673946 1174954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:20:07.677370 1174954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 13:20:07.677449 1174954 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0929 13:20:07.677546 1174954 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-399583-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:20:07.677575 1174954 kube-vip.go:115] generating kube-vip config ...
	I0929 13:20:07.677629 1174954 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0929 13:20:07.691348 1174954 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:20:07.691407 1174954 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
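	The kube-vip manifest above is generated in memory and, a few lines further down, copied to /etc/kubernetes/manifests/kube-vip.yaml on the joining node as a static pod. A hedged sketch for checking it by hand (assuming minikube's ssh pass-through syntax and the -n/--node flag to target the third control plane):
	  minikube ssh -p ha-399583 -n ha-399583-m03 -- sudo cat /etc/kubernetes/manifests/kube-vip.yaml
	  minikube ssh -p ha-399583 -n ha-399583-m03 -- ip addr show eth0   # 192.168.49.254 is bound here only while this node holds the plndr-cp-lock lease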
	I0929 13:20:07.691467 1174954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:20:07.700828 1174954 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:20:07.700902 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0929 13:20:07.709803 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 13:20:07.732991 1174954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:20:07.756090 1174954 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0929 13:20:07.780455 1174954 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0929 13:20:07.783999 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:20:07.795880 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:07.897249 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:20:07.917136 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:20:07.917408 1174954 start.go:317] joinCluster: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:20:07.917533 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0929 13:20:07.917588 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:20:07.943812 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:20:08.133033 1174954 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:20:08.133084 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token swip8t.e0aypxfy2bq39z8n --discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-399583-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0929 13:20:30.840201 1174954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token swip8t.e0aypxfy2bq39z8n --discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-399583-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (22.707096684s)
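	The two commands above are the whole control-plane join flow minikube drives: mint a join command on an existing control plane, then run kubeadm join on the new node with --control-plane. Because minikube has already scp'd the shared CA, sa, front-proxy and etcd keys to the node (the scp lines earlier in this trace), no --certificate-key upload is needed. Condensed sketch, with both commands copied from this log and the token/hash left as placeholders:
	  # on an existing control-plane node
	  kubeadm token create --print-join-command --ttl=0
	  # on the joining node, using the printed token and CA hash
	  kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443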
	I0929 13:20:30.840228 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0929 13:20:31.161795 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-399583-m03 minikube.k8s.io/updated_at=2025_09_29T13_20_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=ha-399583 minikube.k8s.io/primary=false
	I0929 13:20:31.312820 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-399583-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0929 13:20:31.514629 1174954 start.go:319] duration metric: took 23.597216979s to joinCluster
	I0929 13:20:31.514697 1174954 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:20:31.515097 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:20:31.517922 1174954 out.go:179] * Verifying Kubernetes components...
	I0929 13:20:31.520826 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:31.639377 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:20:31.654322 1174954 kapi.go:59] client config for ha-399583: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0929 13:20:31.654409 1174954 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0929 13:20:31.654714 1174954 node_ready.go:35] waiting up to 6m0s for node "ha-399583-m03" to be "Ready" ...
	W0929 13:20:33.658813 1174954 node_ready.go:57] node "ha-399583-m03" has "Ready":"False" status (will retry)
	W0929 13:20:35.658996 1174954 node_ready.go:57] node "ha-399583-m03" has "Ready":"False" status (will retry)
	I0929 13:20:37.662695 1174954 node_ready.go:49] node "ha-399583-m03" is "Ready"
	I0929 13:20:37.662721 1174954 node_ready.go:38] duration metric: took 6.007985311s for node "ha-399583-m03" to be "Ready" ...
	I0929 13:20:37.662738 1174954 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:20:37.662801 1174954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:20:37.678329 1174954 api_server.go:72] duration metric: took 6.163563537s to wait for apiserver process to appear ...
	I0929 13:20:37.678353 1174954 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:20:37.678372 1174954 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 13:20:37.687228 1174954 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 13:20:37.688407 1174954 api_server.go:141] control plane version: v1.34.0
	I0929 13:20:37.688435 1174954 api_server.go:131] duration metric: took 10.075523ms to wait for apiserver health ...
	I0929 13:20:37.688446 1174954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:20:37.698097 1174954 system_pods.go:59] 26 kube-system pods found
	I0929 13:20:37.698143 1174954 system_pods.go:61] "coredns-66bc5c9577-5dqqj" [8f0fb99f-7e4a-493f-b70f-40f31bcab4d4] Running
	I0929 13:20:37.698150 1174954 system_pods.go:61] "coredns-66bc5c9577-p6v89" [3dba7282-54c9-4cf8-acd8-64548b982b4e] Running
	I0929 13:20:37.698155 1174954 system_pods.go:61] "etcd-ha-399583" [3ea005e3-9669-4b7f-98e5-a3692b0c0343] Running
	I0929 13:20:37.698159 1174954 system_pods.go:61] "etcd-ha-399583-m02" [9ba091fd-eec6-44a2-b787-f1f9d65f9362] Running
	I0929 13:20:37.698163 1174954 system_pods.go:61] "etcd-ha-399583-m03" [298d72e2-060d-4074-8a25-cfc31af03292] Pending
	I0929 13:20:37.698169 1174954 system_pods.go:61] "kindnet-552n5" [c90d340a-8259-46ca-8ade-1a0b40030268] Running
	I0929 13:20:37.698174 1174954 system_pods.go:61] "kindnet-dst2d" [2786bef1-c109-449d-ad17-805dd8f59f16] Running
	I0929 13:20:37.698183 1174954 system_pods.go:61] "kindnet-kdnjz" [f4e5b82e-c2b4-4626-9ad2-6133725cd817] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-kdnjz": pod kindnet-kdnjz is already assigned to node "ha-399583-m03")
	I0929 13:20:37.698196 1174954 system_pods.go:61] "kindnet-kvb6m" [da918fb5-7c31-41f6-9ea5-63dbb244c5e8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-kvb6m": pod kindnet-kvb6m is already assigned to node "ha-399583-m03")
	I0929 13:20:37.698212 1174954 system_pods.go:61] "kube-apiserver-ha-399583" [bc7d6fe4-455b-4daa-8f7e-a7f64256e04f] Running
	I0929 13:20:37.698217 1174954 system_pods.go:61] "kube-apiserver-ha-399583-m02" [1efc9e70-f594-43f6-983a-fbc8872669de] Running
	I0929 13:20:37.698222 1174954 system_pods.go:61] "kube-apiserver-ha-399583-m03" [7ce088c0-c8d6-4bbb-9a95-f8600716104a] Pending
	I0929 13:20:37.698226 1174954 system_pods.go:61] "kube-controller-manager-ha-399583" [c034b62f-f349-480f-a0e8-9dadb8cf3271] Running
	I0929 13:20:37.698236 1174954 system_pods.go:61] "kube-controller-manager-ha-399583-m02" [0a817e7c-accd-49b5-b37c-b247802a40de] Running
	I0929 13:20:37.698242 1174954 system_pods.go:61] "kube-controller-manager-ha-399583-m03" [da73157c-5019-4406-8e9a-fad730cbf2e1] Pending
	I0929 13:20:37.698247 1174954 system_pods.go:61] "kube-proxy-2cb75" [9bedc440-6814-4d94-8c20-43960dcf6a3e] Running
	I0929 13:20:37.698259 1174954 system_pods.go:61] "kube-proxy-cpdlp" [9ba5e634-5db2-4592-98d3-cd8afa30cf47] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-cpdlp": pod kube-proxy-cpdlp is already assigned to node "ha-399583-m03")
	I0929 13:20:37.698264 1174954 system_pods.go:61] "kube-proxy-s2d46" [56cb5a11-c68a-45b2-af1f-8211c2f3baf5] Running
	I0929 13:20:37.698270 1174954 system_pods.go:61] "kube-proxy-sntfr" [0c4e5a57-5d35-4f9c-aaa5-2f0ba1d88138] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-sntfr": pod kube-proxy-sntfr is already assigned to node "ha-399583-m03")
	I0929 13:20:37.698275 1174954 system_pods.go:61] "kube-scheduler-ha-399583" [069ff250-ab03-4718-8694-05ba94ef46aa] Running
	I0929 13:20:37.698283 1174954 system_pods.go:61] "kube-scheduler-ha-399583-m02" [fc1b4c16-9849-4fcf-ab34-227630e4991b] Running
	I0929 13:20:37.698288 1174954 system_pods.go:61] "kube-scheduler-ha-399583-m03" [3484019f-984c-41da-9b65-5ce66f587a8b] Pending
	I0929 13:20:37.698292 1174954 system_pods.go:61] "kube-vip-ha-399583" [36f87183-b427-4b90-96b5-37f5b816c1b1] Running
	I0929 13:20:37.698304 1174954 system_pods.go:61] "kube-vip-ha-399583-m02" [4ba43fb8-0080-4909-80ab-9577ed9a03cb] Running
	I0929 13:20:37.698308 1174954 system_pods.go:61] "kube-vip-ha-399583-m03" [7cab35ed-6974-4a9f-8a92-bb53e3846a72] Pending
	I0929 13:20:37.698312 1174954 system_pods.go:61] "storage-provisioner" [5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab] Running
	I0929 13:20:37.698324 1174954 system_pods.go:74] duration metric: took 9.871631ms to wait for pod list to return data ...
	I0929 13:20:37.698332 1174954 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:20:37.701677 1174954 default_sa.go:45] found service account: "default"
	I0929 13:20:37.701703 1174954 default_sa.go:55] duration metric: took 3.360619ms for default service account to be created ...
	I0929 13:20:37.701713 1174954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:20:37.707400 1174954 system_pods.go:86] 26 kube-system pods found
	I0929 13:20:37.707438 1174954 system_pods.go:89] "coredns-66bc5c9577-5dqqj" [8f0fb99f-7e4a-493f-b70f-40f31bcab4d4] Running
	I0929 13:20:37.707446 1174954 system_pods.go:89] "coredns-66bc5c9577-p6v89" [3dba7282-54c9-4cf8-acd8-64548b982b4e] Running
	I0929 13:20:37.707451 1174954 system_pods.go:89] "etcd-ha-399583" [3ea005e3-9669-4b7f-98e5-a3692b0c0343] Running
	I0929 13:20:37.707455 1174954 system_pods.go:89] "etcd-ha-399583-m02" [9ba091fd-eec6-44a2-b787-f1f9d65f9362] Running
	I0929 13:20:37.707459 1174954 system_pods.go:89] "etcd-ha-399583-m03" [298d72e2-060d-4074-8a25-cfc31af03292] Pending
	I0929 13:20:37.707463 1174954 system_pods.go:89] "kindnet-552n5" [c90d340a-8259-46ca-8ade-1a0b40030268] Running
	I0929 13:20:37.707468 1174954 system_pods.go:89] "kindnet-dst2d" [2786bef1-c109-449d-ad17-805dd8f59f16] Running
	I0929 13:20:37.707474 1174954 system_pods.go:89] "kindnet-kdnjz" [f4e5b82e-c2b4-4626-9ad2-6133725cd817] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-kdnjz": pod kindnet-kdnjz is already assigned to node "ha-399583-m03")
	I0929 13:20:37.707481 1174954 system_pods.go:89] "kindnet-kvb6m" [da918fb5-7c31-41f6-9ea5-63dbb244c5e8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-kvb6m": pod kindnet-kvb6m is already assigned to node "ha-399583-m03")
	I0929 13:20:37.707486 1174954 system_pods.go:89] "kube-apiserver-ha-399583" [bc7d6fe4-455b-4daa-8f7e-a7f64256e04f] Running
	I0929 13:20:37.707493 1174954 system_pods.go:89] "kube-apiserver-ha-399583-m02" [1efc9e70-f594-43f6-983a-fbc8872669de] Running
	I0929 13:20:37.707501 1174954 system_pods.go:89] "kube-apiserver-ha-399583-m03" [7ce088c0-c8d6-4bbb-9a95-f8600716104a] Pending
	I0929 13:20:37.707505 1174954 system_pods.go:89] "kube-controller-manager-ha-399583" [c034b62f-f349-480f-a0e8-9dadb8cf3271] Running
	I0929 13:20:37.707509 1174954 system_pods.go:89] "kube-controller-manager-ha-399583-m02" [0a817e7c-accd-49b5-b37c-b247802a40de] Running
	I0929 13:20:37.707513 1174954 system_pods.go:89] "kube-controller-manager-ha-399583-m03" [da73157c-5019-4406-8e9a-fad730cbf2e1] Pending
	I0929 13:20:37.707519 1174954 system_pods.go:89] "kube-proxy-2cb75" [9bedc440-6814-4d94-8c20-43960dcf6a3e] Running
	I0929 13:20:37.707525 1174954 system_pods.go:89] "kube-proxy-cpdlp" [9ba5e634-5db2-4592-98d3-cd8afa30cf47] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-cpdlp": pod kube-proxy-cpdlp is already assigned to node "ha-399583-m03")
	I0929 13:20:37.707535 1174954 system_pods.go:89] "kube-proxy-s2d46" [56cb5a11-c68a-45b2-af1f-8211c2f3baf5] Running
	I0929 13:20:37.707542 1174954 system_pods.go:89] "kube-proxy-sntfr" [0c4e5a57-5d35-4f9c-aaa5-2f0ba1d88138] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-sntfr": pod kube-proxy-sntfr is already assigned to node "ha-399583-m03")
	I0929 13:20:37.707546 1174954 system_pods.go:89] "kube-scheduler-ha-399583" [069ff250-ab03-4718-8694-05ba94ef46aa] Running
	I0929 13:20:37.707553 1174954 system_pods.go:89] "kube-scheduler-ha-399583-m02" [fc1b4c16-9849-4fcf-ab34-227630e4991b] Running
	I0929 13:20:37.707561 1174954 system_pods.go:89] "kube-scheduler-ha-399583-m03" [3484019f-984c-41da-9b65-5ce66f587a8b] Pending
	I0929 13:20:37.707569 1174954 system_pods.go:89] "kube-vip-ha-399583" [36f87183-b427-4b90-96b5-37f5b816c1b1] Running
	I0929 13:20:37.707573 1174954 system_pods.go:89] "kube-vip-ha-399583-m02" [4ba43fb8-0080-4909-80ab-9577ed9a03cb] Running
	I0929 13:20:37.707579 1174954 system_pods.go:89] "kube-vip-ha-399583-m03" [7cab35ed-6974-4a9f-8a92-bb53e3846a72] Pending
	I0929 13:20:37.707585 1174954 system_pods.go:89] "storage-provisioner" [5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab] Running
	I0929 13:20:37.707595 1174954 system_pods.go:126] duration metric: took 5.876446ms to wait for k8s-apps to be running ...
	I0929 13:20:37.707604 1174954 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:20:37.707665 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:20:37.722604 1174954 system_svc.go:56] duration metric: took 14.990547ms WaitForService to wait for kubelet
	I0929 13:20:37.722631 1174954 kubeadm.go:578] duration metric: took 6.207905788s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:20:37.722658 1174954 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:20:37.726306 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:20:37.726334 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:20:37.726358 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:20:37.726363 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:20:37.726368 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:20:37.726373 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:20:37.726378 1174954 node_conditions.go:105] duration metric: took 3.713854ms to run NodePressure ...
	I0929 13:20:37.726391 1174954 start.go:241] waiting for startup goroutines ...
	I0929 13:20:37.726413 1174954 start.go:255] writing updated cluster config ...
	I0929 13:20:37.726766 1174954 ssh_runner.go:195] Run: rm -f paused
	I0929 13:20:37.730586 1174954 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:20:37.731091 1174954 kapi.go:59] client config for ha-399583: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 13:20:37.752334 1174954 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5dqqj" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.759262 1174954 pod_ready.go:94] pod "coredns-66bc5c9577-5dqqj" is "Ready"
	I0929 13:20:37.759286 1174954 pod_ready.go:86] duration metric: took 6.832295ms for pod "coredns-66bc5c9577-5dqqj" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.759296 1174954 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p6v89" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.765860 1174954 pod_ready.go:94] pod "coredns-66bc5c9577-p6v89" is "Ready"
	I0929 13:20:37.765885 1174954 pod_ready.go:86] duration metric: took 6.583186ms for pod "coredns-66bc5c9577-p6v89" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.769329 1174954 pod_ready.go:83] waiting for pod "etcd-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.776112 1174954 pod_ready.go:94] pod "etcd-ha-399583" is "Ready"
	I0929 13:20:37.776142 1174954 pod_ready.go:86] duration metric: took 6.782999ms for pod "etcd-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.776160 1174954 pod_ready.go:83] waiting for pod "etcd-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.784277 1174954 pod_ready.go:94] pod "etcd-ha-399583-m02" is "Ready"
	I0929 13:20:37.784347 1174954 pod_ready.go:86] duration metric: took 8.177065ms for pod "etcd-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.784371 1174954 pod_ready.go:83] waiting for pod "etcd-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.931760 1174954 request.go:683] "Waited before sending request" delay="147.239674ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-399583-m03"
	I0929 13:20:38.131779 1174954 request.go:683] "Waited before sending request" delay="196.159173ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:38.532543 1174954 request.go:683] "Waited before sending request" delay="195.316686ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:38.535802 1174954 pod_ready.go:94] pod "etcd-ha-399583-m03" is "Ready"
	I0929 13:20:38.535831 1174954 pod_ready.go:86] duration metric: took 751.441723ms for pod "etcd-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:38.732127 1174954 request.go:683] "Waited before sending request" delay="196.195588ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0929 13:20:38.736197 1174954 pod_ready.go:83] waiting for pod "kube-apiserver-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:38.932435 1174954 request.go:683] "Waited before sending request" delay="196.132523ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-399583"
	I0929 13:20:39.132519 1174954 request.go:683] "Waited before sending request" delay="196.441794ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583"
	I0929 13:20:39.136200 1174954 pod_ready.go:94] pod "kube-apiserver-ha-399583" is "Ready"
	I0929 13:20:39.136264 1174954 pod_ready.go:86] duration metric: took 400.038595ms for pod "kube-apiserver-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:39.136279 1174954 pod_ready.go:83] waiting for pod "kube-apiserver-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:39.332607 1174954 request.go:683] "Waited before sending request" delay="196.169086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-399583-m02"
	I0929 13:20:39.532382 1174954 request.go:683] "Waited before sending request" delay="194.191784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m02"
	I0929 13:20:39.553510 1174954 pod_ready.go:94] pod "kube-apiserver-ha-399583-m02" is "Ready"
	I0929 13:20:39.553537 1174954 pod_ready.go:86] duration metric: took 417.249893ms for pod "kube-apiserver-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:39.553548 1174954 pod_ready.go:83] waiting for pod "kube-apiserver-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:39.731945 1174954 request.go:683] "Waited before sending request" delay="178.300272ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-399583-m03"
	I0929 13:20:39.932415 1174954 request.go:683] "Waited before sending request" delay="196.153717ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:39.936788 1174954 pod_ready.go:94] pod "kube-apiserver-ha-399583-m03" is "Ready"
	I0929 13:20:39.936817 1174954 pod_ready.go:86] duration metric: took 383.262142ms for pod "kube-apiserver-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.132200 1174954 request.go:683] "Waited before sending request" delay="195.243602ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0929 13:20:40.136932 1174954 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.332188 1174954 request.go:683] "Waited before sending request" delay="195.095556ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-399583"
	I0929 13:20:40.532350 1174954 request.go:683] "Waited before sending request" delay="194.129072ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583"
	I0929 13:20:40.535503 1174954 pod_ready.go:94] pod "kube-controller-manager-ha-399583" is "Ready"
	I0929 13:20:40.535532 1174954 pod_ready.go:86] duration metric: took 398.517332ms for pod "kube-controller-manager-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.535543 1174954 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.731836 1174954 request.go:683] "Waited before sending request" delay="196.209078ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-399583-m02"
	I0929 13:20:40.931591 1174954 request.go:683] "Waited before sending request" delay="196.130087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m02"
	I0929 13:20:40.937829 1174954 pod_ready.go:94] pod "kube-controller-manager-ha-399583-m02" is "Ready"
	I0929 13:20:40.937859 1174954 pod_ready.go:86] duration metric: took 402.309636ms for pod "kube-controller-manager-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.937869 1174954 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:41.132415 1174954 request.go:683] "Waited before sending request" delay="194.463278ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-399583-m03"
	I0929 13:20:41.332667 1174954 request.go:683] "Waited before sending request" delay="195.238162ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:41.335801 1174954 pod_ready.go:94] pod "kube-controller-manager-ha-399583-m03" is "Ready"
	I0929 13:20:41.335830 1174954 pod_ready.go:86] duration metric: took 397.953158ms for pod "kube-controller-manager-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:41.532274 1174954 request.go:683] "Waited before sending request" delay="196.318601ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0929 13:20:41.536572 1174954 pod_ready.go:83] waiting for pod "kube-proxy-2cb75" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:41.732029 1174954 request.go:683] "Waited before sending request" delay="195.329363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2cb75"
	I0929 13:20:41.931901 1174954 request.go:683] "Waited before sending request" delay="196.155875ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m02"
	I0929 13:20:41.935289 1174954 pod_ready.go:94] pod "kube-proxy-2cb75" is "Ready"
	I0929 13:20:41.935368 1174954 pod_ready.go:86] duration metric: took 398.750909ms for pod "kube-proxy-2cb75" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:41.935391 1174954 pod_ready.go:83] waiting for pod "kube-proxy-s2d46" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:42.131780 1174954 request.go:683] "Waited before sending request" delay="196.277501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2d46"
	I0929 13:20:42.331524 1174954 request.go:683] "Waited before sending request" delay="194.302628ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583"
	I0929 13:20:42.348523 1174954 pod_ready.go:94] pod "kube-proxy-s2d46" is "Ready"
	I0929 13:20:42.348592 1174954 pod_ready.go:86] duration metric: took 413.180326ms for pod "kube-proxy-s2d46" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:42.348616 1174954 pod_ready.go:83] waiting for pod "kube-proxy-sntfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:42.532073 1174954 request.go:683] "Waited before sending request" delay="183.332162ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sntfr"
	I0929 13:20:42.732175 1174954 request.go:683] "Waited before sending request" delay="196.130627ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:42.931871 1174954 request.go:683] "Waited before sending request" delay="82.217461ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sntfr"
	I0929 13:20:43.131847 1174954 request.go:683] "Waited before sending request" delay="196.336324ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:43.532015 1174954 request.go:683] "Waited before sending request" delay="179.206866ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:43.932030 1174954 request.go:683] "Waited before sending request" delay="79.164046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	W0929 13:20:44.355878 1174954 pod_ready.go:104] pod "kube-proxy-sntfr" is not "Ready", error: <nil>
	W0929 13:20:46.362030 1174954 pod_ready.go:104] pod "kube-proxy-sntfr" is not "Ready", error: <nil>
	W0929 13:20:48.855313 1174954 pod_ready.go:104] pod "kube-proxy-sntfr" is not "Ready", error: <nil>
	W0929 13:20:50.855820 1174954 pod_ready.go:104] pod "kube-proxy-sntfr" is not "Ready", error: <nil>
	I0929 13:20:51.855721 1174954 pod_ready.go:94] pod "kube-proxy-sntfr" is "Ready"
	I0929 13:20:51.855753 1174954 pod_ready.go:86] duration metric: took 9.507117999s for pod "kube-proxy-sntfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.860987 1174954 pod_ready.go:83] waiting for pod "kube-scheduler-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.869018 1174954 pod_ready.go:94] pod "kube-scheduler-ha-399583" is "Ready"
	I0929 13:20:51.869050 1174954 pod_ready.go:86] duration metric: took 8.033039ms for pod "kube-scheduler-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.869059 1174954 pod_ready.go:83] waiting for pod "kube-scheduler-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.877266 1174954 pod_ready.go:94] pod "kube-scheduler-ha-399583-m02" is "Ready"
	I0929 13:20:51.877293 1174954 pod_ready.go:86] duration metric: took 8.227437ms for pod "kube-scheduler-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.877303 1174954 pod_ready.go:83] waiting for pod "kube-scheduler-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.882444 1174954 pod_ready.go:94] pod "kube-scheduler-ha-399583-m03" is "Ready"
	I0929 13:20:51.882471 1174954 pod_ready.go:86] duration metric: took 5.161484ms for pod "kube-scheduler-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.882483 1174954 pod_ready.go:40] duration metric: took 14.15186681s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:20:51.957469 1174954 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 13:20:51.962421 1174954 out.go:179] * Done! kubectl is now configured to use "ha-399583" cluster and "default" namespace by default
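	The node_ready/pod_ready polling above is minikube's internal health gate for the new node. Outside the test harness, a roughly equivalent check can be run with kubectl against the context this run just configured (a sketch; the label selectors are taken from the wait list in this log):
	  kubectl --context ha-399583 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=4m
	  kubectl --context ha-399583 get nodes -o wide   # ha-399583, ha-399583-m02 and ha-399583-m03 should all report Ready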
	
	
	==> Docker <==
	Sep 29 13:19:04 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:04Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p6v89_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:04 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:04Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dqqj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:05 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:05Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dqqj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:06 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:06Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 29 13:19:09 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 29 13:19:16 ha-399583 dockerd[1187]: time="2025-09-29T13:19:16.675094787Z" level=info msg="ignoring event" container=f9a485d796f1697bb95b77b506d6d7d33a25885377c6842c14c0361eeaa21499 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:16 ha-399583 dockerd[1187]: time="2025-09-29T13:19:16.868803553Z" level=info msg="ignoring event" container=6e602a051efa9808202ff5e0a632206364d9f55dc499d3b6560233b6b121e69c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:17 ha-399583 dockerd[1187]: time="2025-09-29T13:19:17.303396457Z" level=info msg="ignoring event" container=e1ae11a45d2ff19e6c97670cfafd46212633ec26395d6693473ad110b077e269 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:17 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6c2aec96a1a19b5b0a1ac112841a4e3b12f107c874d56c4cd9ffa6e933696aa0/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 29 13:19:17 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:17Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p6v89_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:17 ha-399583 dockerd[1187]: time="2025-09-29T13:19:17.521948603Z" level=info msg="ignoring event" container=053eae7f968bc8920259052b979365028efdf5b6724575a3a95323877965773b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:17 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:17Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p6v89_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:17 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/43b7c0b16072c37f6e6d3559eb5698c9f76cb94808a04f73835d951122fee25b/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 29 13:19:18 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:18Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dqqj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:18 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:18Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dqqj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:18 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:18Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p6v89_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:30 ha-399583 dockerd[1187]: time="2025-09-29T13:19:30.781289164Z" level=info msg="ignoring event" container=8a4f891b2f49420456c0ac4f63dcbc4ff1b870b480314e84049f701543a1c1d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:31 ha-399583 dockerd[1187]: time="2025-09-29T13:19:31.051242222Z" level=info msg="ignoring event" container=6c2aec96a1a19b5b0a1ac112841a4e3b12f107c874d56c4cd9ffa6e933696aa0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:31 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04e39e63500da1f71b6d61f057b3f5efa816d85f61b552b6cb621d1e4243c7bd/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	Sep 29 13:19:32 ha-399583 dockerd[1187]: time="2025-09-29T13:19:32.275605500Z" level=info msg="ignoring event" container=8cb0f155a82909e58a5e4155b29fce1a39d252e1f58821b98c8595baea1a88bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:32 ha-399583 dockerd[1187]: time="2025-09-29T13:19:32.634371914Z" level=info msg="ignoring event" container=43b7c0b16072c37f6e6d3559eb5698c9f76cb94808a04f73835d951122fee25b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:32 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b842712a337e9e223871f9172d7e1a7055b557d1a0ebcd01d0811ba6e235565a/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 29 13:19:33 ha-399583 dockerd[1187]: time="2025-09-29T13:19:33.407253546Z" level=info msg="ignoring event" container=c27d8d57cfbf9403c8ac768b52321e99a3d55657784a667c457dfd2e153c2654 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:20:54 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:20:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf493a34c0ac0ff83676a8c800ef381c857b42a8fa909dc64a7ad5b55598d5b0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 29 13:20:56 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:20:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	70a9591aafb8b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   7 seconds ago        Running             busybox                   0                   bf493a34c0ac0       busybox-7b57f96db7-jwnlz
	42379890b9d92       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       1                   ee9c364d50701       storage-provisioner
	9715ed50002de       138784d87c9c5                                                                                         About a minute ago   Running             coredns                   2                   b842712a337e9       coredns-66bc5c9577-5dqqj
	d674f80b2f082       138784d87c9c5                                                                                         About a minute ago   Running             coredns                   2                   04e39e63500da       coredns-66bc5c9577-p6v89
	8cb0f155a8290       138784d87c9c5                                                                                         About a minute ago   Exited              coredns                   1                   43b7c0b16072c       coredns-66bc5c9577-5dqqj
	8a4f891b2f494       138784d87c9c5                                                                                         About a minute ago   Exited              coredns                   1                   6c2aec96a1a19       coredns-66bc5c9577-p6v89
	bb51f3ad1da69       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              About a minute ago   Running             kindnet-cni               0                   9218c0ec505c1       kindnet-552n5
	476e33049da20       6fc32d66c1411                                                                                         2 minutes ago        Running             kube-proxy                0                   c699a05b6ea5a       kube-proxy-s2d46
	c27d8d57cfbf9       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       0                   ee9c364d50701       storage-provisioner
	f10714b286d96       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     2 minutes ago        Running             kube-vip                  0                   4f7d569139668       kube-vip-ha-399583
	8726a81976510       996be7e86d9b3                                                                                         2 minutes ago        Running             kube-controller-manager   0                   cff6c86576de9       kube-controller-manager-ha-399583
	32e3ec1309ec0       a1894772a478e                                                                                         2 minutes ago        Running             etcd                      0                   3c8775165fbaf       etcd-ha-399583
	59b02d97e1876       d291939e99406                                                                                         2 minutes ago        Running             kube-apiserver            0                   6200f7fcf684c       kube-apiserver-ha-399583
	e5057f638dbe7       a25f5ef9c34c3                                                                                         2 minutes ago        Running             kube-scheduler            0                   7c9691ea056e9       kube-scheduler-ha-399583
	
	
	==> coredns [8a4f891b2f49] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48610 - 25140 "HINFO IN 7493471838613022335.9023286310770280868. udp 57 false 512" - - 0 5.00995779s
	[ERROR] plugin/errors: 2 7493471838613022335.9023286310770280868. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:52860 - 10305 "HINFO IN 7493471838613022335.9023286310770280868. udp 57 false 512" - - 0 5.000098632s
	[ERROR] plugin/errors: 2 7493471838613022335.9023286310770280868. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [8cb0f155a829] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54798 - 15184 "HINFO IN 5192265244121682960.9157686456546179351. udp 57 false 512" - - 0 5.036302766s
	[ERROR] plugin/errors: 2 5192265244121682960.9157686456546179351. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:38140 - 2359 "HINFO IN 5192265244121682960.9157686456546179351. udp 57 false 512" - - 0 5.005272334s
	[ERROR] plugin/errors: 2 5192265244121682960.9157686456546179351. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [9715ed50002d] <==
	[INFO] 10.244.1.2:43773 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.002164339s
	[INFO] 10.244.1.3:44676 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000127574s
	[INFO] 10.244.1.3:42927 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000100308s
	[INFO] 10.244.1.3:34109 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000136371s
	[INFO] 10.244.0.4:49358 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000090512s
	[INFO] 10.244.1.2:36584 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003747176s
	[INFO] 10.244.1.2:58306 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00023329s
	[INFO] 10.244.1.2:42892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002171404s
	[INFO] 10.244.1.2:49531 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175485s
	[INFO] 10.244.1.3:50947 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001342678s
	[INFO] 10.244.1.3:53717 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210742s
	[INFO] 10.244.1.3:38752 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000193996s
	[INFO] 10.244.1.3:34232 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000246403s
	[INFO] 10.244.1.3:51880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189811s
	[INFO] 10.244.0.4:58129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014145s
	[INFO] 10.244.0.4:35718 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001338731s
	[INFO] 10.244.0.4:44633 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182828s
	[INFO] 10.244.0.4:37431 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133671s
	[INFO] 10.244.0.4:59341 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218119s
	[INFO] 10.244.1.2:35029 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150549s
	[INFO] 10.244.1.2:57163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068136s
	[INFO] 10.244.1.3:49247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145593s
	[INFO] 10.244.1.3:34936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187702s
	[INFO] 10.244.1.3:41595 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098307s
	[INFO] 10.244.0.4:46052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120789s
	
	
	==> coredns [d674f80b2f08] <==
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43817 - 1332 "HINFO IN 2110927985003271130.2061164144012676697. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022618601s
	[INFO] 10.244.1.3:37613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236352s
	[INFO] 10.244.1.3:47369 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000851029s
	[INFO] 10.244.0.4:56432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184478s
	[INFO] 10.244.0.4:52692 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.019662893s
	[INFO] 10.244.0.4:34677 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000135s
	[INFO] 10.244.0.4:50968 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.001367646s
	[INFO] 10.244.1.2:35001 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106709s
	[INFO] 10.244.1.2:58996 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022781s
	[INFO] 10.244.1.2:52458 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145872s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169872s
	[INFO] 10.244.1.3:41156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116178s
	[INFO] 10.244.1.3:48480 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001146507s
	[INFO] 10.244.1.3:51760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223913s
	[INFO] 10.244.0.4:39844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001029451s
	[INFO] 10.244.0.4:37614 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000242891s
	[INFO] 10.244.0.4:45422 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125507s
	[INFO] 10.244.1.2:59758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161011s
	[INFO] 10.244.1.2:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170825s
	[INFO] 10.244.1.3:41909 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00019151s
	[INFO] 10.244.0.4:42587 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163891s
	[INFO] 10.244.0.4:58665 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167805s
	[INFO] 10.244.0.4:52810 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066758s
	
	
	==> describe nodes <==
	Name:               ha-399583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-399583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=ha-399583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_19_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:18:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-399583
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:21:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:21:01 +0000   Mon, 29 Sep 2025 13:18:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:21:01 +0000   Mon, 29 Sep 2025 13:18:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:21:01 +0000   Mon, 29 Sep 2025 13:18:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:21:01 +0000   Mon, 29 Sep 2025 13:18:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-399583
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cb22907f09f471eaac8169fc8a85b65
	  System UUID:                45d3b675-cd2b-4b39-985d-76e474d341de
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jwnlz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-66bc5c9577-5dqqj             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 coredns-66bc5c9577-p6v89             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m3s
	  kube-system                 etcd-ha-399583                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m5s
	  kube-system                 kindnet-552n5                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-ha-399583             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m7s
	  kube-system                 kube-controller-manager-ha-399583    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-proxy-s2d46                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-ha-399583             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-vip-ha-399583                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m                     kube-proxy       
	  Normal   NodeAllocatableEnforced  2m16s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m16s (x8 over 2m16s)  kubelet          Node ha-399583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m16s (x8 over 2m16s)  kubelet          Node ha-399583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m16s (x7 over 2m16s)  kubelet          Node ha-399583 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m5s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m5s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  2m5s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m5s                   kubelet          Node ha-399583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s                   kubelet          Node ha-399583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s                   kubelet          Node ha-399583 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m4s                   node-controller  Node ha-399583 event: Registered Node ha-399583 in Controller
	  Normal   RegisteredNode           83s                    node-controller  Node ha-399583 event: Registered Node ha-399583 in Controller
	  Normal   RegisteredNode           36s                    node-controller  Node ha-399583 event: Registered Node ha-399583 in Controller
	
	
	Name:               ha-399583-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-399583-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=ha-399583
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_29T13_19_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:19:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-399583-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:20:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:20:28 +0000   Mon, 29 Sep 2025 13:19:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:20:28 +0000   Mon, 29 Sep 2025 13:19:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:20:28 +0000   Mon, 29 Sep 2025 13:19:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:20:28 +0000   Mon, 29 Sep 2025 13:19:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-399583-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4d265f6f5254457a0d07afe9ec8f395
	  System UUID:                6be8d3fe-d7ac-4d4e-912a-855ffd6a8a5a
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8md6f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     busybox-7b57f96db7-92l4c                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 etcd-ha-399583-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         71s
	  kube-system                 kindnet-dst2d                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      77s
	  kube-system                 kube-apiserver-ha-399583-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-controller-manager-ha-399583-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-proxy-2cb75                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         77s
	  kube-system                 kube-scheduler-ha-399583-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 kube-vip-ha-399583-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        68s   kube-proxy       
	  Normal  RegisteredNode  74s   node-controller  Node ha-399583-m02 event: Registered Node ha-399583-m02 in Controller
	  Normal  RegisteredNode  73s   node-controller  Node ha-399583-m02 event: Registered Node ha-399583-m02 in Controller
	  Normal  RegisteredNode  36s   node-controller  Node ha-399583-m02 event: Registered Node ha-399583-m02 in Controller
	
	
	Name:               ha-399583-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-399583-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=ha-399583
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_29T13_20_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:20:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-399583-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:20:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:20:50 +0000   Mon, 29 Sep 2025 13:20:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:20:50 +0000   Mon, 29 Sep 2025 13:20:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:20:50 +0000   Mon, 29 Sep 2025 13:20:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:20:50 +0000   Mon, 29 Sep 2025 13:20:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-399583-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 1276f3da2509469f932c5f388a8929fd
	  System UUID:                4d61a959-d6ea-41f7-aef8-195886039d6b
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2lt6z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 etcd-ha-399583-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         27s
	  kube-system                 kindnet-kvb6m                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      34s
	  kube-system                 kube-apiserver-ha-399583-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-ha-399583-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-sntfr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-scheduler-ha-399583-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-vip-ha-399583-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        12s   kube-proxy       
	  Normal  RegisteredNode  34s   node-controller  Node ha-399583-m03 event: Registered Node ha-399583-m03 in Controller
	  Normal  RegisteredNode  33s   node-controller  Node ha-399583-m03 event: Registered Node ha-399583-m03 in Controller
	  Normal  RegisteredNode  31s   node-controller  Node ha-399583-m03 event: Registered Node ha-399583-m03 in Controller
	
	
	==> dmesg <==
	[Sep29 11:47] kauditd_printk_skb: 8 callbacks suppressed
	[Sep29 12:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep29 13:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [32e3ec1309ec] <==
	{"level":"warn","ts":"2025-09-29T13:20:17.703276Z","caller":"etcdhttp/peer.go:152","msg":"failed to promote a member","member-id":"db4645cfc6f218b6","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-09-29T13:20:17.829890Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:20:17.832689Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:20:18.045476Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"db4645cfc6f218b6","error":"failed to dial db4645cfc6f218b6 on stream Message (peer db4645cfc6f218b6 failed to find local node aec36adc501070cc)"}
	{"level":"warn","ts":"2025-09-29T13:20:18.079557Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"warn","ts":"2025-09-29T13:20:18.149922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55898","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:20:18.158517Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 13899027835773194409 15800393101374265526)"}
	{"level":"info","ts":"2025-09-29T13:20:18.158925Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.159083Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"db4645cfc6f218b6"}
	{"level":"warn","ts":"2025-09-29T13:20:18.189269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55902","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:20:18.355177Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"db4645cfc6f218b6","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-29T13:20:18.355218Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.355232Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"warn","ts":"2025-09-29T13:20:18.357904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55944","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:20:18.402996Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.420825Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"warn","ts":"2025-09-29T13:20:18.438694Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"db4645cfc6f218b6","error":"failed to write db4645cfc6f218b6 on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.4:46836: write: broken pipe)"}
	{"level":"warn","ts":"2025-09-29T13:20:18.438970Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.473639Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"db4645cfc6f218b6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-29T13:20:18.473899Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.474021Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:29.597236Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-29T13:20:30.896477Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-29T13:20:47.099116Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-29T13:20:47.674468Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"db4645cfc6f218b6","bytes":1507865,"size":"1.5 MB","took":"30.114274637s"}
	
	
	==> kernel <==
	 13:21:04 up  5:03,  0 users,  load average: 5.12, 2.65, 2.39
	Linux ha-399583 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [bb51f3ad1da6] <==
	I0929 13:20:17.711399       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:17.711436       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:27.719270       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 13:20:27.719304       1 main.go:301] handling current node
	I0929 13:20:27.719321       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:27.719327       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:37.711568       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0929 13:20:37.711611       1 main.go:324] Node ha-399583-m03 has CIDR [10.244.2.0/24] 
	I0929 13:20:37.711863       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I0929 13:20:37.712006       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 13:20:37.712161       1 main.go:301] handling current node
	I0929 13:20:37.712191       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:37.712203       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:47.715052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 13:20:47.715143       1 main.go:301] handling current node
	I0929 13:20:47.715192       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:47.715219       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:47.715438       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0929 13:20:47.715454       1 main.go:324] Node ha-399583-m03 has CIDR [10.244.2.0/24] 
	I0929 13:20:57.712653       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 13:20:57.712688       1 main.go:301] handling current node
	I0929 13:20:57.712703       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:57.712709       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:57.713202       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0929 13:20:57.713225       1 main.go:324] Node ha-399583-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [59b02d97e187] <==
	I0929 13:18:55.544647       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0929 13:18:55.553138       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 13:18:55.554568       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 13:18:55.559782       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 13:18:55.737596       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 13:18:59.304415       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 13:18:59.318058       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 13:18:59.331512       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:19:00.997223       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 13:19:01.599047       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:19:01.605021       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:19:01.692438       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0929 13:20:10.474768       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:20:15.697182       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 13:20:59.214609       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36270: use of closed network connection
	E0929 13:20:59.541054       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36296: use of closed network connection
	E0929 13:20:59.812197       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36316: use of closed network connection
	E0929 13:21:00.330399       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36334: use of closed network connection
	E0929 13:21:00.616085       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36346: use of closed network connection
	E0929 13:21:01.152353       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36390: use of closed network connection
	E0929 13:21:01.447946       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36402: use of closed network connection
	E0929 13:21:01.695403       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36416: use of closed network connection
	E0929 13:21:02.014893       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36432: use of closed network connection
	E0929 13:21:02.288801       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36450: use of closed network connection
	E0929 13:21:02.531347       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36468: use of closed network connection
	
	
	==> kube-controller-manager [8726a8197651] <==
	I0929 13:19:00.786478       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 13:19:00.789065       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 13:19:00.789132       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 13:19:00.789288       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 13:19:00.789309       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 13:19:00.789327       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 13:19:00.790846       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 13:19:00.791971       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 13:19:00.797206       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 13:19:00.799647       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 13:19:00.800698       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:19:00.808548       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 13:19:00.808814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:19:00.816985       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 13:19:00.847055       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:19:00.847083       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:19:00.847092       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:19:47.677665       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-399583-m02\" does not exist"
	I0929 13:19:47.694520       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-399583-m02" podCIDRs=["10.244.1.0/24"]
	I0929 13:19:50.797654       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-399583-m02"
	E0929 13:20:29.604263       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-w87hh failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-w87hh\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0929 13:20:29.608056       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-w87hh failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-w87hh\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0929 13:20:30.300260       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-399583-m03\" does not exist"
	I0929 13:20:30.377623       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-399583-m03" podCIDRs=["10.244.2.0/24"]
	I0929 13:20:30.820218       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-399583-m03"
	
	
	==> kube-proxy [476e33049da2] <==
	I0929 13:19:03.199821       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:19:03.294226       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:19:03.394460       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:19:03.394502       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 13:19:03.394589       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:19:03.504967       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:19:03.505029       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:19:03.532787       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:19:03.533149       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:19:03.533166       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:19:03.534951       1 config.go:200] "Starting service config controller"
	I0929 13:19:03.534962       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:19:03.534997       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:19:03.535002       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:19:03.535014       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:19:03.535018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:19:03.539569       1 config.go:309] "Starting node config controller"
	I0929 13:19:03.539584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:19:03.539592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:19:03.640616       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:19:03.640653       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:19:03.640699       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e5057f638dbe] <==
	E0929 13:18:54.475641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:18:54.475689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:18:54.475743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:18:54.475832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:18:54.475967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 13:18:54.476015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:18:54.476078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:18:54.481255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 13:18:54.481334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:18:54.481399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:18:54.481713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:18:54.486612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:18:54.486806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:18:54.488764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:18:55.283737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0929 13:18:57.358532       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0929 13:20:30.505311       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cpdlp\": pod kube-proxy-cpdlp is already assigned to node \"ha-399583-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cpdlp" node="ha-399583-m03"
	E0929 13:20:30.506253       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 9ba5e634-5db2-4592-98d3-cd8afa30cf47(kube-system/kube-proxy-cpdlp) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-cpdlp"
	E0929 13:20:30.506353       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cpdlp\": pod kube-proxy-cpdlp is already assigned to node \"ha-399583-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-cpdlp"
	I0929 13:20:30.507729       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cpdlp" node="ha-399583-m03"
	I0929 13:20:53.296078       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="163889eb-aeae-4a84-8222-859102d02ec1" pod="default/busybox-7b57f96db7-92l4c" assumedNode="ha-399583-m02" currentNode="ha-399583"
	E0929 13:20:53.352858       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-92l4c\": pod busybox-7b57f96db7-92l4c is already assigned to node \"ha-399583-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-92l4c" node="ha-399583"
	E0929 13:20:53.354238       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 163889eb-aeae-4a84-8222-859102d02ec1(default/busybox-7b57f96db7-92l4c) was assumed on ha-399583 but assigned to ha-399583-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-92l4c"
	E0929 13:20:53.354478       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-92l4c\": pod busybox-7b57f96db7-92l4c is already assigned to node \"ha-399583-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-92l4c"
	I0929 13:20:53.356051       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-92l4c" node="ha-399583-m02"
	
	
	==> kubelet <==
	Sep 29 13:19:01 ha-399583 kubelet[2463]: I0929 13:19:01.941401    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab-tmp\") pod \"storage-provisioner\" (UID: \"5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab\") " pod="kube-system/storage-provisioner"
	Sep 29 13:19:01 ha-399583 kubelet[2463]: I0929 13:19:01.942717    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhcwh\" (UniqueName: \"kubernetes.io/projected/5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab-kube-api-access-qhcwh\") pod \"storage-provisioner\" (UID: \"5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab\") " pod="kube-system/storage-provisioner"
	Sep 29 13:19:01 ha-399583 kubelet[2463]: I0929 13:19:01.966702    2463 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.046974    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f0fb99f-7e4a-493f-b70f-40f31bcab4d4-config-volume\") pod \"coredns-66bc5c9577-5dqqj\" (UID: \"8f0fb99f-7e4a-493f-b70f-40f31bcab4d4\") " pod="kube-system/coredns-66bc5c9577-5dqqj"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.047048    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnx2\" (UniqueName: \"kubernetes.io/projected/8f0fb99f-7e4a-493f-b70f-40f31bcab4d4-kube-api-access-klnx2\") pod \"coredns-66bc5c9577-5dqqj\" (UID: \"8f0fb99f-7e4a-493f-b70f-40f31bcab4d4\") " pod="kube-system/coredns-66bc5c9577-5dqqj"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.147537    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dba7282-54c9-4cf8-acd8-64548b982b4e-config-volume\") pod \"coredns-66bc5c9577-p6v89\" (UID: \"3dba7282-54c9-4cf8-acd8-64548b982b4e\") " pod="kube-system/coredns-66bc5c9577-p6v89"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.147614    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsw2\" (UniqueName: \"kubernetes.io/projected/3dba7282-54c9-4cf8-acd8-64548b982b4e-kube-api-access-5gsw2\") pod \"coredns-66bc5c9577-p6v89\" (UID: \"3dba7282-54c9-4cf8-acd8-64548b982b4e\") " pod="kube-system/coredns-66bc5c9577-p6v89"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.600210    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9218c0ec505c1057217ae4b2feb723de8d7840bad6ef2c8e380e65980791b749"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.611142    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c699a05b6ea5ac4476f9801aec167be528e8c574c0bba66e213422d433e1dfb5"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.661525    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee9c364d5070171840acea461541085340472795fb2ec67d14b11b3ffe769fed"
	Sep 29 13:19:03 ha-399583 kubelet[2463]: I0929 13:19:03.912260    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.912239149 podStartE2EDuration="2.912239149s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 13:19:03.911960943 +0000 UTC m=+4.815601881" watchObservedRunningTime="2025-09-29 13:19:03.912239149 +0000 UTC m=+4.815880079"
	Sep 29 13:19:04 ha-399583 kubelet[2463]: I0929 13:19:04.291473    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p6v89" podStartSLOduration=3.291451462 podStartE2EDuration="3.291451462s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 13:19:04.145831901 +0000 UTC m=+5.049472831" watchObservedRunningTime="2025-09-29 13:19:04.291451462 +0000 UTC m=+5.195092392"
	Sep 29 13:19:04 ha-399583 kubelet[2463]: I0929 13:19:04.958480    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s2d46" podStartSLOduration=3.958448969 podStartE2EDuration="3.958448969s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 13:19:04.292620385 +0000 UTC m=+5.196261324" watchObservedRunningTime="2025-09-29 13:19:04.958448969 +0000 UTC m=+5.862089899"
	Sep 29 13:19:05 ha-399583 kubelet[2463]: I0929 13:19:05.415830    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5dqqj" podStartSLOduration=4.415802841 podStartE2EDuration="4.415802841s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 13:19:05.381612865 +0000 UTC m=+6.285253812" watchObservedRunningTime="2025-09-29 13:19:05.415802841 +0000 UTC m=+6.319443763"
	Sep 29 13:19:08 ha-399583 kubelet[2463]: I0929 13:19:08.458326    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-552n5" podStartSLOduration=3.736339255 podStartE2EDuration="7.458306005s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="2025-09-29 13:19:02.612284284 +0000 UTC m=+3.515925214" lastFinishedPulling="2025-09-29 13:19:06.334251034 +0000 UTC m=+7.237891964" observedRunningTime="2025-09-29 13:19:08.456921826 +0000 UTC m=+9.360562756" watchObservedRunningTime="2025-09-29 13:19:08.458306005 +0000 UTC m=+9.361946935"
	Sep 29 13:19:09 ha-399583 kubelet[2463]: I0929 13:19:09.612760    2463 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 29 13:19:09 ha-399583 kubelet[2463]: I0929 13:19:09.617819    2463 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 29 13:19:17 ha-399583 kubelet[2463]: I0929 13:19:17.682336    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e602a051efa9808202ff5e0a632206364d9f55dc499d3b6560233b6b121e69c"
	Sep 29 13:19:18 ha-399583 kubelet[2463]: I0929 13:19:18.734101    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053eae7f968bc8920259052b979365028efdf5b6724575a3a95323877965773b"
	Sep 29 13:19:31 ha-399583 kubelet[2463]: I0929 13:19:31.279473    2463 scope.go:117] "RemoveContainer" containerID="f9a485d796f1697bb95b77b506d6d7d33a25885377c6842c14c0361eeaa21499"
	Sep 29 13:19:32 ha-399583 kubelet[2463]: I0929 13:19:32.418725    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c2aec96a1a19b5b0a1ac112841a4e3b12f107c874d56c4cd9ffa6e933696aa0"
	Sep 29 13:19:33 ha-399583 kubelet[2463]: I0929 13:19:33.476964    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b7c0b16072c37f6e6d3559eb5698c9f76cb94808a04f73835d951122fee25b"
	Sep 29 13:19:33 ha-399583 kubelet[2463]: I0929 13:19:33.477022    2463 scope.go:117] "RemoveContainer" containerID="e1ae11a45d2ff19e6c97670cfafd46212633ec26395d6693473ad110b077e269"
	Sep 29 13:19:34 ha-399583 kubelet[2463]: I0929 13:19:34.511475    2463 scope.go:117] "RemoveContainer" containerID="c27d8d57cfbf9403c8ac768b52321e99a3d55657784a667c457dfd2e153c2654"
	Sep 29 13:20:53 ha-399583 kubelet[2463]: I0929 13:20:53.531890    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrvl\" (UniqueName: \"kubernetes.io/projected/32a441ef-2e7d-4ea5-9e66-94d19d0b14be-kube-api-access-mkrvl\") pod \"busybox-7b57f96db7-jwnlz\" (UID: \"32a441ef-2e7d-4ea5-9e66-94d19d0b14be\") " pod="default/busybox-7b57f96db7-jwnlz"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-399583 -n ha-399583
helpers_test.go:269: (dbg) Run:  kubectl --context ha-399583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (12.29s)
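Note on the scheduler entries in the post-mortem above: DefaultBinder reports "pod ... is already assigned to node ..." because the pods had already been bound (in this run, kube-proxy-cpdlp to ha-399583-m03 and busybox-7b57f96db7-92l4c to ha-399583-m02), after which the scheduler aborts requeueing them. A minimal Go sketch, not part of the test suite, for confirming where such a pod actually landed; it assumes kubectl is on PATH and that the ha-399583 context from this report exists, and the pod name is simply taken from the log above.

// Sketch: print the node a pod is bound to, via kubectl jsonpath.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func nodeOfPod(context, namespace, pod string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "-n", namespace,
		"get", "pod", pod, "-o", "jsonpath={.spec.nodeName}").Output()
	if err != nil {
		return "", fmt.Errorf("kubectl get pod %s: %w", pod, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Pod name taken from the scheduler log above; any pod name works.
	node, err := nodeOfPod("ha-399583", "default", "busybox-7b57f96db7-92l4c")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("pod is bound to node:", node)
}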

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (3.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:214: minikube host ip is nil: 
** stderr ** 
	nslookup: can't resolve 'host.minikube.internal'

                                                
                                                
** /stderr **
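For context on the failure above: ha_test.go:207 pipes nslookup host.minikube.internal through awk 'NR==5' and cut -d' ' -f3, i.e. it takes the third space-separated field of the fifth output line as the host IP. When busybox cannot resolve the name (the stderr above), nothing reaches the fifth line and the test reports the host IP as nil. A minimal Go sketch of that extraction follows; the success-case output is illustrative only and was not captured from this run.

// Sketch: mimic `nslookup ... | awk 'NR==5' | cut -d' ' -f3`.
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup returns the third single-space-separated field of the
// fifth line, or "" when the output is too short (resolution failed).
func hostIPFromNslookup(output string) string {
	lines := strings.Split(output, "\n")
	if len(lines) < 5 {
		return "" // no fifth line: the test sees "host ip is nil"
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	// Illustrative busybox-style output; 192.168.49.1 is the network gateway
	// shown in the docker inspect output for this cluster.
	ok := `Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.168.49.1 host.minikube.internal
`
	fmt.Printf("resolved: %q\n", hostIPFromNslookup(ok))
	fmt.Printf("failed:   %q\n", hostIPFromNslookup("nslookup: can't resolve 'host.minikube.internal'\n"))
}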
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-399583
helpers_test.go:243: (dbg) docker inspect ha-399583:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0",
	        "Created": "2025-09-29T13:18:30.192674344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1175337,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T13:18:30.249493703Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/hostname",
	        "HostsPath": "/var/lib/docker/containers/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/hosts",
	        "LogPath": "/var/lib/docker/containers/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0-json.log",
	        "Name": "/ha-399583",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-399583:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "ha-399583",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0",
	                "LowerDir": "/var/lib/docker/overlay2/f0822d0b552f9e4e2efeccb3b2b40c10abb4291265f6a6cb22e145e8a4a4e4a1-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f0822d0b552f9e4e2efeccb3b2b40c10abb4291265f6a6cb22e145e8a4a4e4a1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f0822d0b552f9e4e2efeccb3b2b40c10abb4291265f6a6cb22e145e8a4a4e4a1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f0822d0b552f9e4e2efeccb3b2b40c10abb4291265f6a6cb22e145e8a4a4e4a1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-399583",
	                "Source": "/var/lib/docker/volumes/ha-399583/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-399583",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-399583",
	                "name.minikube.sigs.k8s.io": "ha-399583",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "93c434fd6d9a32fd353d4c5388bdbf4bc9ebfdd2f75c7ea365d882b05b65a187",
	            "SandboxKey": "/var/run/docker/netns/93c434fd6d9a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33938"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33939"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33942"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33940"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33941"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-399583": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "62:05:70:ec:5f:75",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "85cc826cc833d1082aa1a3789e79bbf0a30c36137b1e336517db46ba97d3357c",
	                    "EndpointID": "6885f8b403088835e27b130473eb4cf9ec77d0dfd6bf48e4f1c2d359f5836ab8",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-399583",
	                        "4ff0a10009db"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
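The inspect output above is also what later steps consume: the "Last Start" log further down reads the published SSH port with a docker inspect Go template over NetworkSettings.Ports (22/tcp -> 127.0.0.1:33938 in this run). A minimal Go sketch of that lookup, assuming the docker CLI is on PATH and the ha-399583 container from the inspect output still exists:

// Sketch: read the host port published for 22/tcp on a container.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func sshHostPort(container string) (string, error) {
	// Same template the minikube log below uses against NetworkSettings.Ports.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("docker inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("ha-399583")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	// In this report the mapping is 22/tcp -> 127.0.0.1:33938.
	fmt.Println("ssh is published on 127.0.0.1:" + port)
}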
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p ha-399583 -n ha-399583
helpers_test.go:252: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 logs -n 25: (1.571367176s)
helpers_test.go:260: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-085003 ssh pgrep buildkitd                                                                                     │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │                     │
	│ image   │ functional-085003 image build -t localhost/my-image:functional-085003 testdata/build --alsologtostderr                    │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image   │ functional-085003 image ls                                                                                                │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image   │ functional-085003 image ls --format json --alsologtostderr                                                                │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ image   │ functional-085003 image ls --format table --alsologtostderr                                                               │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:13 UTC │ 29 Sep 25 13:13 UTC │
	│ delete  │ -p functional-085003                                                                                                      │ functional-085003 │ jenkins │ v1.37.0 │ 29 Sep 25 13:18 UTC │ 29 Sep 25 13:18 UTC │
	│ start   │ ha-399583 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker         │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:18 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                          │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- rollout status deployment/busybox                                                                    │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.io                                              │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │                     │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.io                                              │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.io                                              │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:20 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.io                                              │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:20 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default                                         │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.default                                         │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.default                                         │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.default                                         │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- nslookup kubernetes.default.svc.cluster.local                       │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │                     │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-8md6f -- nslookup kubernetes.default.svc.cluster.local                       │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-92l4c -- nslookup kubernetes.default.svc.cluster.local                       │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-jwnlz -- nslookup kubernetes.default.svc.cluster.local                       │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	│ kubectl │ ha-399583 kubectl -- exec busybox-7b57f96db7-2lt6z -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-399583         │ jenkins │ v1.37.0 │ 29 Sep 25 13:21 UTC │ 29 Sep 25 13:21 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:18:25
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:18:25.325660 1174954 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:18:25.325856 1174954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:18:25.325888 1174954 out.go:374] Setting ErrFile to fd 2...
	I0929 13:18:25.325911 1174954 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:18:25.326183 1174954 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:18:25.326627 1174954 out.go:368] Setting JSON to false
	I0929 13:18:25.327555 1174954 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":18058,"bootTime":1759133848,"procs":152,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 13:18:25.327654 1174954 start.go:140] virtualization:  
	I0929 13:18:25.331392 1174954 out.go:179] * [ha-399583] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 13:18:25.335709 1174954 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:18:25.335907 1174954 notify.go:220] Checking for updates...
	I0929 13:18:25.342060 1174954 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:18:25.345287 1174954 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:18:25.348296 1174954 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 13:18:25.351485 1174954 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 13:18:25.354476 1174954 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:18:25.357728 1174954 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:18:25.390086 1174954 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 13:18:25.390212 1174954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:18:25.451517 1174954 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-29 13:18:25.44201949 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aa
rch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Pa
th:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:18:25.451633 1174954 docker.go:318] overlay module found
	I0929 13:18:25.454860 1174954 out.go:179] * Using the docker driver based on user configuration
	I0929 13:18:25.457805 1174954 start.go:304] selected driver: docker
	I0929 13:18:25.457828 1174954 start.go:924] validating driver "docker" against <nil>
	I0929 13:18:25.457843 1174954 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:18:25.458546 1174954 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:18:25.528041 1174954 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:true NGoroutines:42 SystemTime:2025-09-29 13:18:25.519102084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:18:25.528194 1174954 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 13:18:25.528429 1174954 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:18:25.531456 1174954 out.go:179] * Using Docker driver with root privileges
	I0929 13:18:25.534332 1174954 cni.go:84] Creating CNI manager for ""
	I0929 13:18:25.534410 1174954 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0929 13:18:25.534423 1174954 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0929 13:18:25.534514 1174954 start.go:348] cluster config:
	{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin
:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:18:25.537689 1174954 out.go:179] * Starting "ha-399583" primary control-plane node in "ha-399583" cluster
	I0929 13:18:25.540683 1174954 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 13:18:25.543629 1174954 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:18:25.546597 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:18:25.546663 1174954 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 13:18:25.546694 1174954 cache.go:58] Caching tarball of preloaded images
	I0929 13:18:25.546692 1174954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:18:25.546795 1174954 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 13:18:25.546806 1174954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 13:18:25.547172 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:18:25.547202 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json: {Name:mkae797a6658ba3b436ea5ee875282b75c92e17a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:25.565926 1174954 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:18:25.565953 1174954 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:18:25.565967 1174954 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:18:25.565991 1174954 start.go:360] acquireMachinesLock for ha-399583: {Name:mk6a93adabf6340a9742e1fe127a7da8b14537cd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:18:25.566094 1174954 start.go:364] duration metric: took 87µs to acquireMachinesLock for "ha-399583"
	I0929 13:18:25.566126 1174954 start.go:93] Provisioning new machine with config: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:18:25.566199 1174954 start.go:125] createHost starting for "" (driver="docker")
	I0929 13:18:25.569651 1174954 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:18:25.569898 1174954 start.go:159] libmachine.API.Create for "ha-399583" (driver="docker")
	I0929 13:18:25.569936 1174954 client.go:168] LocalClient.Create starting
	I0929 13:18:25.570028 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem
	I0929 13:18:25.570066 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:18:25.570084 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:18:25.570150 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem
	I0929 13:18:25.570170 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:18:25.570183 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:18:25.570548 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 13:18:25.586603 1174954 cli_runner.go:211] docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 13:18:25.586701 1174954 network_create.go:284] running [docker network inspect ha-399583] to gather additional debugging logs...
	I0929 13:18:25.586722 1174954 cli_runner.go:164] Run: docker network inspect ha-399583
	W0929 13:18:25.602505 1174954 cli_runner.go:211] docker network inspect ha-399583 returned with exit code 1
	I0929 13:18:25.602536 1174954 network_create.go:287] error running [docker network inspect ha-399583]: docker network inspect ha-399583: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-399583 not found
	I0929 13:18:25.602550 1174954 network_create.go:289] output of [docker network inspect ha-399583]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-399583 not found
	
	** /stderr **
	I0929 13:18:25.602658 1174954 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:18:25.618134 1174954 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40017e6c80}
	I0929 13:18:25.618170 1174954 network_create.go:124] attempt to create docker network ha-399583 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0929 13:18:25.618223 1174954 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-399583 ha-399583
	I0929 13:18:25.670767 1174954 network_create.go:108] docker network ha-399583 192.168.49.0/24 created
	I0929 13:18:25.670801 1174954 kic.go:121] calculated static IP "192.168.49.2" for the "ha-399583" container
	I0929 13:18:25.670875 1174954 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:18:25.686104 1174954 cli_runner.go:164] Run: docker volume create ha-399583 --label name.minikube.sigs.k8s.io=ha-399583 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:18:25.703494 1174954 oci.go:103] Successfully created a docker volume ha-399583
	I0929 13:18:25.703602 1174954 cli_runner.go:164] Run: docker run --rm --name ha-399583-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583 --entrypoint /usr/bin/test -v ha-399583:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:18:26.253990 1174954 oci.go:107] Successfully prepared a docker volume ha-399583
	I0929 13:18:26.254053 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:18:26.254085 1174954 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:18:26.254161 1174954 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:18:30.123296 1174954 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.869094796s)
	I0929 13:18:30.123334 1174954 kic.go:203] duration metric: took 3.869254742s to extract preloaded images to volume ...
	W0929 13:18:30.123495 1174954 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 13:18:30.123608 1174954 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:18:30.176759 1174954 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-399583 --name ha-399583 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-399583 --network ha-399583 --ip 192.168.49.2 --volume ha-399583:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:18:30.460613 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Running}}
	I0929 13:18:30.492877 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:18:30.519153 1174954 cli_runner.go:164] Run: docker exec ha-399583 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:18:30.569683 1174954 oci.go:144] the created container "ha-399583" has a running status.
	I0929 13:18:30.569719 1174954 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa...
	I0929 13:18:30.855932 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0929 13:18:30.856058 1174954 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:18:30.877650 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:18:30.898680 1174954 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:18:30.898699 1174954 kic_runner.go:114] Args: [docker exec --privileged ha-399583 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:18:30.961194 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:18:30.996709 1174954 machine.go:93] provisionDockerMachine start ...
	I0929 13:18:30.996801 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:31.036817 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:31.037146 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:31.037162 1174954 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:18:31.037867 1174954 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:59688->127.0.0.1:33938: read: connection reset by peer
	I0929 13:18:34.175845 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583
	
	I0929 13:18:34.175867 1174954 ubuntu.go:182] provisioning hostname "ha-399583"
	I0929 13:18:34.175962 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:34.193970 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:34.194295 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:34.194311 1174954 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-399583 && echo "ha-399583" | sudo tee /etc/hostname
	I0929 13:18:34.344270 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583
	
	I0929 13:18:34.344370 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:34.361798 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:34.362119 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:34.362142 1174954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-399583' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-399583/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-399583' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:18:34.500384 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:18:34.500414 1174954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 13:18:34.500433 1174954 ubuntu.go:190] setting up certificates
	I0929 13:18:34.500487 1174954 provision.go:84] configureAuth start
	I0929 13:18:34.500574 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583
	I0929 13:18:34.517650 1174954 provision.go:143] copyHostCerts
	I0929 13:18:34.517695 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:18:34.517730 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 13:18:34.517742 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:18:34.517851 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 13:18:34.517942 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:18:34.517965 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 13:18:34.517976 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:18:34.518003 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 13:18:34.518049 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:18:34.518069 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 13:18:34.518078 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:18:34.518102 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 13:18:34.518153 1174954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.ha-399583 san=[127.0.0.1 192.168.49.2 ha-399583 localhost minikube]
	I0929 13:18:35.154273 1174954 provision.go:177] copyRemoteCerts
	I0929 13:18:35.154354 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:18:35.154396 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:35.175285 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:35.273188 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0929 13:18:35.273256 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:18:35.297804 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0929 13:18:35.297864 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0929 13:18:35.322638 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0929 13:18:35.322704 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:18:35.347450 1174954 provision.go:87] duration metric: took 846.935387ms to configureAuth
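configureAuth above issues a server certificate signed by the local minikube CA whose SANs cover the node IP, the hostname and localhost, then ships it to /etc/docker on the node. A minimal sketch of the signing step with Go's crypto/x509, assuming the CA pair (caCert/caKey) and the server public key are already loaded; this is an illustration, not minikube's actual cert code:

	package sketch

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	// issueServerCert signs a server certificate whose SANs match the values in
	// the log above. PEM encoding and key generation are left out for brevity.
	func issueServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, pub *rsa.PublicKey) ([]byte, error) {
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-399583"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
			DNSNames:     []string{"ha-399583", "localhost", "minikube"},
		}
		return x509.CreateCertificate(rand.Reader, tmpl, caCert, pub, caKey)
	}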
	I0929 13:18:35.347480 1174954 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:18:35.347731 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:18:35.347798 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:35.365115 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:35.365432 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:35.365448 1174954 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 13:18:35.505044 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 13:18:35.505065 1174954 ubuntu.go:71] root file system type: overlay
	I0929 13:18:35.505178 1174954 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 13:18:35.505240 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:35.522915 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:35.523214 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:35.523296 1174954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 13:18:35.677571 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 13:18:35.677698 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:35.695403 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:18:35.695708 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33938 <nil> <nil>}
	I0929 13:18:35.695730 1174954 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 13:18:36.529221 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 13:18:35.670941125 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 13:18:36.529300 1174954 machine.go:96] duration metric: took 5.532566432s to provisionDockerMachine
	I0929 13:18:36.529327 1174954 client.go:171] duration metric: took 10.959380481s to LocalClient.Create
	I0929 13:18:36.529395 1174954 start.go:167] duration metric: took 10.959483827s to libmachine.API.Create "ha-399583"
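The docker.service unit echoed back above is rendered on the host and written to docker.service.new over SSH; the empty ExecStart= line is deliberate, since it clears the ExecStart inherited from the base unit before the TLS-enabled dockerd command is set, and the diff-or-move one-liner only restarts Docker when the rendered unit actually differs. A rough sketch of rendering such a unit with text/template; the template text and dockerOpts struct here are illustrative, not minikube's real ones:

	package sketch

	import (
		"bytes"
		"text/template"
	)

	// Only the ExecStart-reset pattern is shown; the full unit carries the TLS
	// flags and labels seen in the log.
	const dockerUnitTmpl = `[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H fd:// -H unix:///var/run/docker.sock {{range .ExtraArgs}}{{.}} {{end}}--label provider={{.Provider}}
	`

	type dockerOpts struct {
		Provider  string
		ExtraArgs []string
	}

	// renderDockerUnit fills the unit template with the driver-specific options.
	func renderDockerUnit(o dockerOpts) (string, error) {
		t, err := template.New("docker.service").Parse(dockerUnitTmpl)
		if err != nil {
			return "", err
		}
		var buf bytes.Buffer
		if err := t.Execute(&buf, o); err != nil {
			return "", err
		}
		return buf.String(), nil
	}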
	I0929 13:18:36.529434 1174954 start.go:293] postStartSetup for "ha-399583" (driver="docker")
	I0929 13:18:36.529459 1174954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:18:36.529556 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:18:36.529638 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:36.554644 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:36.653460 1174954 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:18:36.656534 1174954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:18:36.656571 1174954 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:18:36.656581 1174954 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:18:36.656588 1174954 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:18:36.656598 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 13:18:36.656655 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 13:18:36.656743 1174954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 13:18:36.656755 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /etc/ssl/certs/11276402.pem
	I0929 13:18:36.656864 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:18:36.665215 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:18:36.689070 1174954 start.go:296] duration metric: took 159.606314ms for postStartSetup
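postStartSetup scans .minikube/addons and .minikube/files for local assets and copies each one to the matching path inside the node, which is how 11276402.pem ends up in /etc/ssl/certs. Locally the scan amounts to a directory walk that strips the root prefix to get the target path, roughly as below; the SSH copy step is omitted and localAssets is a hypothetical helper:

	package sketch

	import (
		"io/fs"
		"path/filepath"
		"strings"
	)

	// localAssets maps every regular file under a .minikube/files tree to the
	// absolute path it should have on the node.
	func localAssets(root string) (map[string]string, error) {
		assets := map[string]string{}
		err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
			if err != nil || d.IsDir() {
				return err
			}
			target := strings.TrimPrefix(path, root) // e.g. /etc/ssl/certs/11276402.pem
			assets[path] = target
			return nil
		})
		return assets, err
	}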
	I0929 13:18:36.689535 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583
	I0929 13:18:36.706663 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:18:36.706952 1174954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:18:36.707015 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:36.723615 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:36.821307 1174954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:18:36.825898 1174954 start.go:128] duration metric: took 11.25968165s to createHost
	I0929 13:18:36.825921 1174954 start.go:83] releasing machines lock for "ha-399583", held for 11.259812623s
	I0929 13:18:36.825994 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583
	I0929 13:18:36.847632 1174954 ssh_runner.go:195] Run: cat /version.json
	I0929 13:18:36.847697 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:36.847948 1174954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:18:36.848012 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:18:36.866297 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:36.868804 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:18:37.090836 1174954 ssh_runner.go:195] Run: systemctl --version
	I0929 13:18:37.095122 1174954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:18:37.099451 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:18:37.125013 1174954 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:18:37.125093 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:18:37.155198 1174954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:18:37.155225 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:18:37.155260 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:18:37.155359 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:18:37.171885 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:18:37.181883 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:18:37.191626 1174954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 13:18:37.191691 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 13:18:37.201359 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:18:37.211710 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:18:37.222842 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:18:37.232813 1174954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:18:37.242409 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:18:37.252058 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:18:37.261972 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:18:37.271400 1174954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:18:37.279991 1174954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:18:37.288465 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:37.371632 1174954 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:18:37.458610 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:18:37.458712 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:18:37.458782 1174954 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 13:18:37.471372 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:18:37.483533 1174954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:18:37.506680 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:18:37.518759 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:18:37.531288 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:18:37.548173 1174954 ssh_runner.go:195] Run: which cri-dockerd
	I0929 13:18:37.551762 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 13:18:37.560848 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 13:18:37.579206 1174954 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 13:18:37.671443 1174954 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 13:18:37.763836 1174954 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 13:18:37.764018 1174954 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 13:18:37.783762 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 13:18:37.796204 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:37.889023 1174954 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 13:18:38.285326 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:18:38.297127 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 13:18:38.309301 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:18:38.321343 1174954 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 13:18:38.415073 1174954 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 13:18:38.506484 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:38.587249 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 13:18:38.601893 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 13:18:38.613784 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:38.709996 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 13:18:38.779650 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:18:38.792850 1174954 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 13:18:38.792919 1174954 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 13:18:38.796394 1174954 start.go:563] Will wait 60s for crictl version
	I0929 13:18:38.796457 1174954 ssh_runner.go:195] Run: which crictl
	I0929 13:18:38.800012 1174954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:18:38.840429 1174954 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
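Before continuing, start.go waits up to 60s for /var/run/cri-dockerd.sock to exist and for crictl to answer. In minikube the checks run through ssh_runner on the node; done locally, the socket wait boils down to polling stat against a deadline, as in this sketch (waitForSocket is illustrative, not minikube's API):

	package sketch

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls for a socket path until it exists or the deadline
	// passes (the log above allows 60s for /var/run/cri-dockerd.sock).
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("%s did not appear within %s", path, timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}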
	I0929 13:18:38.840596 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:18:38.863675 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:18:38.892228 1174954 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 13:18:38.892348 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:18:38.908433 1174954 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 13:18:38.912222 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:18:38.923132 1174954 kubeadm.go:875] updating cluster {Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 13:18:38.923255 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:18:38.923319 1174954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 13:18:38.942060 1174954 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 13:18:38.942085 1174954 docker.go:621] Images already preloaded, skipping extraction
	I0929 13:18:38.942148 1174954 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 13:18:38.961435 1174954 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 13:18:38.961461 1174954 cache_images.go:85] Images are preloaded, skipping loading
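The two `docker images` listings are compared against the expected preload set for v1.34.0; since every image is already present, extraction of the preload tarball is skipped. The decision is essentially a set-difference check, sketched here with the expected list copied from the stdout block above (the real logic lives in minikube's docker/cache_images code):

	package sketch

	// needsPreloadExtraction reports whether any expected preload image is
	// missing from the images currently listed by the runtime.
	func needsPreloadExtraction(listed []string) bool {
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.34.0",
			"registry.k8s.io/kube-controller-manager:v1.34.0",
			"registry.k8s.io/kube-scheduler:v1.34.0",
			"registry.k8s.io/kube-proxy:v1.34.0",
			"registry.k8s.io/etcd:3.6.4-0",
			"registry.k8s.io/pause:3.10.1",
			"registry.k8s.io/coredns/coredns:v1.12.1",
			"gcr.io/k8s-minikube/storage-provisioner:v5",
		}
		have := make(map[string]bool, len(listed))
		for _, img := range listed {
			have[img] = true
		}
		for _, img := range expected {
			if !have[img] {
				return true // at least one expected image is missing
			}
		}
		return false
	}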
	I0929 13:18:38.961471 1174954 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 docker true true} ...
	I0929 13:18:38.961561 1174954 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-399583 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:18:38.961640 1174954 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 13:18:39.017894 1174954 cni.go:84] Creating CNI manager for ""
	I0929 13:18:39.017916 1174954 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0929 13:18:39.017927 1174954 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 13:18:39.017951 1174954 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-399583 NodeName:ha-399583 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 13:18:39.018079 1174954 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-399583"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
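The generated kubeadm config pins podSubnet to 10.244.0.0/16 and serviceSubnet to 10.96.0.0/12; the two ranges must not overlap, or pod and service addresses would collide. A quick way to sanity-check that, shown only as an illustration (this log does not perform this exact check):

	package sketch

	import "net"

	// cidrsOverlap reports whether two CIDR blocks overlap; for CIDR-aligned
	// networks that is the case exactly when one contains the other's base IP.
	func cidrsOverlap(a, b string) (bool, error) {
		_, na, err := net.ParseCIDR(a)
		if err != nil {
			return false, err
		}
		_, nb, err := net.ParseCIDR(b)
		if err != nil {
			return false, err
		}
		return na.Contains(nb.IP) || nb.Contains(na.IP), nil
	}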
	
	I0929 13:18:39.018100 1174954 kube-vip.go:115] generating kube-vip config ...
	I0929 13:18:39.018158 1174954 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0929 13:18:39.031327 1174954 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:18:39.031430 1174954 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
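kube-vip's IPVS-based control-plane load balancing was skipped earlier because `lsmod | grep ip_vs` exited non-zero (see 13:18:39.031327), so the rendered static-pod manifest falls back to ARP-based leader election (vip_arp and vip_leaderelection both "true") for the 192.168.49.254 VIP. The probe itself is just an exit-status check, roughly:

	package sketch

	import "os/exec"

	// ipvsAvailable reports whether any ip_vs kernel modules are loaded. grep
	// exits non-zero when nothing matches, which the caller treats as "fall
	// back to ARP mode" rather than as a hard error.
	func ipvsAvailable() bool {
		return exec.Command("/bin/sh", "-c", "lsmod | grep ip_vs").Run() == nil
	}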
	I0929 13:18:39.031495 1174954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:18:39.040417 1174954 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:18:39.040488 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0929 13:18:39.049491 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0929 13:18:39.067606 1174954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:18:39.086175 1174954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2210 bytes)
	I0929 13:18:39.104681 1174954 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0929 13:18:39.122790 1174954 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0929 13:18:39.126235 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:18:39.136919 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:18:39.226079 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:18:39.242061 1174954 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583 for IP: 192.168.49.2
	I0929 13:18:39.242094 1174954 certs.go:194] generating shared ca certs ...
	I0929 13:18:39.242110 1174954 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:39.242316 1174954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 13:18:39.242378 1174954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 13:18:39.242392 1174954 certs.go:256] generating profile certs ...
	I0929 13:18:39.242466 1174954 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key
	I0929 13:18:39.242485 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt with IP's: []
	I0929 13:18:39.957115 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt ...
	I0929 13:18:39.957148 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt: {Name:mk1d73907125fade7f91d0fe8012be0fdd8c8d6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:39.957386 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key ...
	I0929 13:18:39.957402 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key: {Name:mk7e3fe444e6167839184499439714d7a7842523 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:39.957500 1174954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.34dec115
	I0929 13:18:39.957518 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.34dec115 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0929 13:18:40.191674 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.34dec115 ...
	I0929 13:18:40.191712 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.34dec115: {Name:mkd4d8e4bece92b6c9105bc5a6d7f51e2f611f2b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:40.191913 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.34dec115 ...
	I0929 13:18:40.191928 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.34dec115: {Name:mk208156a7f3fea25f75539b023d4edfe837050e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:40.192026 1174954 certs.go:381] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.34dec115 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt
	I0929 13:18:40.192111 1174954 certs.go:385] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.34dec115 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key
	I0929 13:18:40.192172 1174954 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key
	I0929 13:18:40.192193 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt with IP's: []
	I0929 13:18:40.468298 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt ...
	I0929 13:18:40.468331 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt: {Name:mk35b7db6803c80f90ba766bd1daace4cc8b3e5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:40.468539 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key ...
	I0929 13:18:40.468554 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key: {Name:mk25bd552775c6992e7bb37dd60dfd938facc3eb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:18:40.468640 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0929 13:18:40.468662 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0929 13:18:40.468675 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0929 13:18:40.468691 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0929 13:18:40.468704 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0929 13:18:40.468720 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0929 13:18:40.468731 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0929 13:18:40.468749 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0929 13:18:40.468802 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 13:18:40.468844 1174954 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 13:18:40.468858 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:18:40.468883 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:18:40.468916 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:18:40.468942 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 13:18:40.468998 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:18:40.469031 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /usr/share/ca-certificates/11276402.pem
	I0929 13:18:40.469047 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:18:40.469063 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem -> /usr/share/ca-certificates/1127640.pem
	I0929 13:18:40.469689 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:18:40.495407 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:18:40.519592 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:18:40.544498 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:18:40.569218 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 13:18:40.593655 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:18:40.618016 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:18:40.642675 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:18:40.666400 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 13:18:40.691296 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:18:40.716474 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 13:18:40.741528 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 13:18:40.759513 1174954 ssh_runner.go:195] Run: openssl version
	I0929 13:18:40.765304 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 13:18:40.774825 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 13:18:40.778379 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 13:18:40.778495 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 13:18:40.785766 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 13:18:40.795303 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 13:18:40.805885 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 13:18:40.809909 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 13:18:40.809976 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 13:18:40.817495 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:18:40.827153 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:18:40.839099 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:18:40.843069 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:18:40.843150 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:18:40.850770 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
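The `openssl x509 -hash` calls above compute the subject hash that OpenSSL's trust store uses as a filename, and each certificate is then symlinked into /etc/ssl/certs as <hash>.0 (51391683.0, 3ec20f2e.0, b5213941.0). Done locally rather than over ssh_runner, the same step looks roughly like this sketch (linkIntoTrustStore is a hypothetical helper):

	package sketch

	import (
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkIntoTrustStore hashes a PEM certificate with openssl and links it into
	// /etc/ssl/certs under "<hash>.0", the name OpenSSL looks up at verify time.
	func linkIntoTrustStore(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		link := filepath.Join("/etc/ssl/certs", strings.TrimSpace(string(out))+".0")
		_ = os.Remove(link) // replace a stale link if one exists
		return os.Symlink(pemPath, link)
	}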
	I0929 13:18:40.863600 1174954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:18:40.866904 1174954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 13:18:40.866957 1174954 kubeadm.go:392] StartCluster: {Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:18:40.867089 1174954 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 13:18:40.884822 1174954 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 13:18:40.893826 1174954 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 13:18:40.902745 1174954 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 13:18:40.902829 1174954 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 13:18:40.911734 1174954 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 13:18:40.911800 1174954 kubeadm.go:157] found existing configuration files:
	
	I0929 13:18:40.911867 1174954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 13:18:40.921241 1174954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 13:18:40.921326 1174954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 13:18:40.930129 1174954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 13:18:40.939251 1174954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 13:18:40.939329 1174954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 13:18:40.947772 1174954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 13:18:40.956866 1174954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 13:18:40.956931 1174954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 13:18:40.965577 1174954 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 13:18:40.974666 1174954 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 13:18:40.974738 1174954 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 13:18:40.983386 1174954 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 13:18:41.029892 1174954 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 13:18:41.030142 1174954 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 13:18:41.051235 1174954 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 13:18:41.051410 1174954 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0929 13:18:41.051488 1174954 kubeadm.go:310] OS: Linux
	I0929 13:18:41.051573 1174954 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 13:18:41.051649 1174954 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0929 13:18:41.051731 1174954 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 13:18:41.051818 1174954 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 13:18:41.051899 1174954 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 13:18:41.052026 1174954 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 13:18:41.052116 1174954 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 13:18:41.052191 1174954 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 13:18:41.052274 1174954 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0929 13:18:41.111979 1174954 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 13:18:41.112139 1174954 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 13:18:41.112240 1174954 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 13:18:41.128166 1174954 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 13:18:41.134243 1174954 out.go:252]   - Generating certificates and keys ...
	I0929 13:18:41.134350 1174954 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 13:18:41.134424 1174954 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 13:18:41.453292 1174954 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 13:18:42.122861 1174954 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 13:18:42.827036 1174954 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 13:18:43.073920 1174954 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 13:18:43.351514 1174954 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 13:18:43.351819 1174954 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-399583 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 13:18:43.533844 1174954 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 13:18:43.534171 1174954 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-399583 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0929 13:18:44.006192 1174954 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 13:18:44.750617 1174954 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 13:18:45.548351 1174954 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 13:18:45.548663 1174954 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 13:18:46.293239 1174954 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 13:18:46.349784 1174954 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 13:18:46.488964 1174954 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 13:18:47.153378 1174954 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 13:18:48.135742 1174954 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 13:18:48.136561 1174954 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 13:18:48.139329 1174954 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 13:18:48.142720 1174954 out.go:252]   - Booting up control plane ...
	I0929 13:18:48.142849 1174954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 13:18:48.142937 1174954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 13:18:48.143354 1174954 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 13:18:48.155872 1174954 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 13:18:48.156214 1174954 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 13:18:48.163579 1174954 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 13:18:48.164089 1174954 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 13:18:48.164366 1174954 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 13:18:48.258843 1174954 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 13:18:48.258985 1174954 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 13:18:49.256853 1174954 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.00160354s
	I0929 13:18:49.259541 1174954 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 13:18:49.259640 1174954 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0929 13:18:49.259999 1174954 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 13:18:49.260110 1174954 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 13:18:53.286542 1174954 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 4.026540969s
	I0929 13:18:54.472772 1174954 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.213197139s
	I0929 13:18:58.487580 1174954 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 9.227933141s
	I0929 13:18:58.507229 1174954 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 13:18:58.522665 1174954 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 13:18:58.537089 1174954 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 13:18:58.537318 1174954 kubeadm.go:310] [mark-control-plane] Marking the node ha-399583 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 13:18:58.552394 1174954 kubeadm.go:310] [bootstrap-token] Using token: b3fy01.4kp1xgsz2v3o318m
	I0929 13:18:58.555478 1174954 out.go:252]   - Configuring RBAC rules ...
	I0929 13:18:58.555616 1174954 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 13:18:58.560478 1174954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 13:18:58.570959 1174954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 13:18:58.575207 1174954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 13:18:58.579447 1174954 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 13:18:58.583815 1174954 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 13:18:58.894018 1174954 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 13:18:59.319477 1174954 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 13:18:59.895066 1174954 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 13:18:59.896410 1174954 kubeadm.go:310] 
	I0929 13:18:59.896498 1174954 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 13:18:59.896537 1174954 kubeadm.go:310] 
	I0929 13:18:59.896623 1174954 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 13:18:59.896632 1174954 kubeadm.go:310] 
	I0929 13:18:59.896659 1174954 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 13:18:59.896725 1174954 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 13:18:59.896784 1174954 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 13:18:59.896793 1174954 kubeadm.go:310] 
	I0929 13:18:59.896856 1174954 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 13:18:59.896865 1174954 kubeadm.go:310] 
	I0929 13:18:59.896915 1174954 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 13:18:59.896923 1174954 kubeadm.go:310] 
	I0929 13:18:59.896979 1174954 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 13:18:59.897061 1174954 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 13:18:59.897138 1174954 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 13:18:59.897146 1174954 kubeadm.go:310] 
	I0929 13:18:59.897234 1174954 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 13:18:59.897318 1174954 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 13:18:59.897326 1174954 kubeadm.go:310] 
	I0929 13:18:59.897414 1174954 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token b3fy01.4kp1xgsz2v3o318m \
	I0929 13:18:59.897526 1174954 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b \
	I0929 13:18:59.897552 1174954 kubeadm.go:310] 	--control-plane 
	I0929 13:18:59.897560 1174954 kubeadm.go:310] 
	I0929 13:18:59.897649 1174954 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 13:18:59.897657 1174954 kubeadm.go:310] 
	I0929 13:18:59.897743 1174954 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token b3fy01.4kp1xgsz2v3o318m \
	I0929 13:18:59.897853 1174954 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b 
	I0929 13:18:59.902447 1174954 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0929 13:18:59.902700 1174954 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0929 13:18:59.902817 1174954 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
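The [control-plane-check] lines above poll the components' standard health endpoints; a rough manual equivalent from inside the control-plane node (a sketch only, using the IP and ports shown in this log; -k skips TLS verification against the self-signed serving certs) would be:

    # check the same endpoints kubeadm waits on (run inside the ha-399583 node)
    curl -ks https://192.168.49.2:8443/livez      # kube-apiserver
    curl -ks https://127.0.0.1:10257/healthz      # kube-controller-manager
    curl -ks https://127.0.0.1:10259/livez        # kube-scheduler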
	I0929 13:18:59.902900 1174954 cni.go:84] Creating CNI manager for ""
	I0929 13:18:59.902949 1174954 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0929 13:18:59.907871 1174954 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0929 13:18:59.910633 1174954 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0929 13:18:59.914720 1174954 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 13:18:59.914744 1174954 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0929 13:18:59.936309 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
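The manifest applied here deploys kindnet, per the multinode recommendation above; one way to confirm the DaemonSet came up from the host (a sketch, assuming the profile's kubeconfig context is named ha-399583 and the DaemonSet keeps its usual app=kindnet label) is:

    kubectl --context ha-399583 -n kube-system get daemonset,pods -l app=kindnet -o wide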
	I0929 13:19:00.547165 1174954 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 13:19:00.547246 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:19:00.547317 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-399583 minikube.k8s.io/updated_at=2025_09_29T13_19_00_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=ha-399583 minikube.k8s.io/primary=true
	I0929 13:19:00.800539 1174954 ops.go:34] apiserver oom_adj: -16
	I0929 13:19:00.800650 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 13:19:00.918525 1174954 kubeadm.go:1105] duration metric: took 371.351132ms to wait for elevateKubeSystemPrivileges
	I0929 13:19:00.918560 1174954 kubeadm.go:394] duration metric: took 20.051606228s to StartCluster
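The elevateKubeSystemPrivileges step above creates the minikube-rbac clusterrolebinding and labels the primary node; both can be spot-checked once the kubeconfig update below has been written (a sketch, assuming the context name matches the profile):

    kubectl --context ha-399583 get clusterrolebinding minikube-rbac -o wide
    kubectl --context ha-399583 get node ha-399583 --show-labels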
	I0929 13:19:00.918586 1174954 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:00.918674 1174954 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:19:00.919336 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:00.919578 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 13:19:00.919613 1174954 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 13:19:00.919681 1174954 addons.go:69] Setting storage-provisioner=true in profile "ha-399583"
	I0929 13:19:00.919695 1174954 addons.go:238] Setting addon storage-provisioner=true in "ha-399583"
	I0929 13:19:00.919594 1174954 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:00.919741 1174954 start.go:241] waiting for startup goroutines ...
	I0929 13:19:00.919718 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:19:00.919885 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:00.919922 1174954 addons.go:69] Setting default-storageclass=true in profile "ha-399583"
	I0929 13:19:00.919944 1174954 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-399583"
	I0929 13:19:00.920349 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:19:00.920350 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:19:00.968615 1174954 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 13:19:00.971101 1174954 kapi.go:59] client config for ha-399583: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]strin
g(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 13:19:00.971678 1174954 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0929 13:19:00.971701 1174954 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0929 13:19:00.971706 1174954 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0929 13:19:00.971712 1174954 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0929 13:19:00.971716 1174954 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0929 13:19:00.971984 1174954 addons.go:238] Setting addon default-storageclass=true in "ha-399583"
	I0929 13:19:00.972022 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:19:00.972362 1174954 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:19:00.972381 1174954 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 13:19:00.972437 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:19:00.972811 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:19:00.973251 1174954 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0929 13:19:00.998874 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:19:01.017083 1174954 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 13:19:01.017107 1174954 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 13:19:01.017167 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:19:01.044762 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:19:01.151607 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 13:19:01.192885 1174954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 13:19:01.201000 1174954 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 13:19:01.563960 1174954 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
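The sed pipeline a few lines above splices a hosts block (192.168.49.1 host.minikube.internal, with fallthrough) into the CoreDNS Corefile ahead of the forward directive; the patched ConfigMap can be inspected afterwards (a sketch, again assuming the ha-399583 context):

    kubectl --context ha-399583 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'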
	I0929 13:19:01.885736 1174954 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 13:19:01.888798 1174954 addons.go:514] duration metric: took 969.143808ms for enable addons: enabled=[default-storageclass storage-provisioner]
	I0929 13:19:01.888892 1174954 start.go:246] waiting for cluster config update ...
	I0929 13:19:01.888953 1174954 start.go:255] writing updated cluster config ...
	I0929 13:19:01.891339 1174954 out.go:203] 
	I0929 13:19:01.894679 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:01.894835 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:01.898491 1174954 out.go:179] * Starting "ha-399583-m02" control-plane node in "ha-399583" cluster
	I0929 13:19:01.901483 1174954 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 13:19:01.904643 1174954 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:19:01.907503 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:19:01.907660 1174954 cache.go:58] Caching tarball of preloaded images
	I0929 13:19:01.907606 1174954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:19:01.908015 1174954 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 13:19:01.908053 1174954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 13:19:01.908215 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:01.930613 1174954 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:19:01.930643 1174954 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:19:01.930657 1174954 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:19:01.930682 1174954 start.go:360] acquireMachinesLock for ha-399583-m02: {Name:mkc66e87512662de4b81d9ad77cee2a1bd85fc84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:19:01.930803 1174954 start.go:364] duration metric: took 96.69µs to acquireMachinesLock for "ha-399583-m02"
	I0929 13:19:01.930836 1174954 start.go:93] Provisioning new machine with config: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:01.930915 1174954 start.go:125] createHost starting for "m02" (driver="docker")
	I0929 13:19:01.936288 1174954 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:19:01.936402 1174954 start.go:159] libmachine.API.Create for "ha-399583" (driver="docker")
	I0929 13:19:01.936431 1174954 client.go:168] LocalClient.Create starting
	I0929 13:19:01.936496 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem
	I0929 13:19:01.936561 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:19:01.936597 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:19:01.936667 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem
	I0929 13:19:01.936692 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:19:01.936707 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:19:01.936965 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:19:01.964760 1174954 network_create.go:77] Found existing network {name:ha-399583 subnet:0x4001bf5140 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0929 13:19:01.964801 1174954 kic.go:121] calculated static IP "192.168.49.3" for the "ha-399583-m02" container
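The static IP is computed from the existing ha-399583 network rather than requested from Docker; which addresses that network has actually handed out can be listed with a Go template over docker network inspect (a sketch):

    docker network inspect ha-399583 \
      --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{"\n"}}{{end}}'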
	I0929 13:19:01.964878 1174954 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:19:01.990942 1174954 cli_runner.go:164] Run: docker volume create ha-399583-m02 --label name.minikube.sigs.k8s.io=ha-399583-m02 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:19:02.014409 1174954 oci.go:103] Successfully created a docker volume ha-399583-m02
	I0929 13:19:02.014494 1174954 cli_runner.go:164] Run: docker run --rm --name ha-399583-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583-m02 --entrypoint /usr/bin/test -v ha-399583-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:19:02.679202 1174954 oci.go:107] Successfully prepared a docker volume ha-399583-m02
	I0929 13:19:02.679231 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:19:02.679251 1174954 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:19:02.679325 1174954 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:19:07.152375 1174954 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.473014878s)
	I0929 13:19:07.152405 1174954 kic.go:203] duration metric: took 4.473150412s to extract preloaded images to volume ...
	W0929 13:19:07.152593 1174954 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 13:19:07.152711 1174954 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:19:07.237258 1174954 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-399583-m02 --name ha-399583-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-399583-m02 --network ha-399583 --ip 192.168.49.3 --volume ha-399583-m02:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:19:07.588662 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Running}}
	I0929 13:19:07.610641 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Status}}
	I0929 13:19:07.639217 1174954 cli_runner.go:164] Run: docker exec ha-399583-m02 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:19:07.689265 1174954 oci.go:144] the created container "ha-399583-m02" has a running status.
	I0929 13:19:07.689290 1174954 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa...
	I0929 13:19:08.967590 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0929 13:19:08.967645 1174954 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:19:08.993092 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Status}}
	I0929 13:19:09.018121 1174954 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:19:09.018143 1174954 kic_runner.go:114] Args: [docker exec --privileged ha-399583-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:19:09.091238 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Status}}
	I0929 13:19:09.114872 1174954 machine.go:93] provisionDockerMachine start ...
	I0929 13:19:09.114979 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:09.138454 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:09.138781 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:09.138796 1174954 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:19:09.300185 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583-m02
	
	I0929 13:19:09.300221 1174954 ubuntu.go:182] provisioning hostname "ha-399583-m02"
	I0929 13:19:09.300323 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:09.328396 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:09.329896 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:09.329930 1174954 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-399583-m02 && echo "ha-399583-m02" | sudo tee /etc/hostname
	I0929 13:19:09.523332 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583-m02
	
	I0929 13:19:09.523435 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:09.574718 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:09.575028 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:09.575051 1174954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-399583-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-399583-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-399583-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:19:09.769956 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:19:09.769989 1174954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 13:19:09.770011 1174954 ubuntu.go:190] setting up certificates
	I0929 13:19:09.770020 1174954 provision.go:84] configureAuth start
	I0929 13:19:09.770081 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m02
	I0929 13:19:09.806137 1174954 provision.go:143] copyHostCerts
	I0929 13:19:09.806189 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:19:09.806223 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 13:19:09.806235 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:19:09.806313 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 13:19:09.806397 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:19:09.806419 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 13:19:09.806425 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:19:09.806453 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 13:19:09.806504 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:19:09.806525 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 13:19:09.806536 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:19:09.806567 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 13:19:09.806620 1174954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.ha-399583-m02 san=[127.0.0.1 192.168.49.3 ha-399583-m02 localhost minikube]
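The server certificate generated here carries the SANs listed in the san=[...] argument; they can be confirmed after the fact with openssl against the pem written under the machines directory (a sketch using the path from this run):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'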
	I0929 13:19:10.866535 1174954 provision.go:177] copyRemoteCerts
	I0929 13:19:10.866611 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:19:10.866660 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:10.889682 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
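The ssh client parameters logged here (127.0.0.1, forwarded port 33943, the per-machine id_rsa, user docker) are enough to open a shell on the new node by hand if needed; a sketch:

    ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
      -i /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa \
      -p 33943 docker@127.0.0.1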
	I0929 13:19:10.999316 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0929 13:19:10.999393 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 13:19:11.052232 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0929 13:19:11.052300 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 13:19:11.089337 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0929 13:19:11.089407 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:19:11.119732 1174954 provision.go:87] duration metric: took 1.349696847s to configureAuth
	I0929 13:19:11.119764 1174954 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:19:11.119970 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:11.120035 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:11.155123 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:11.155429 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:11.155445 1174954 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 13:19:11.339035 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 13:19:11.339054 1174954 ubuntu.go:71] root file system type: overlay
	I0929 13:19:11.339178 1174954 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 13:19:11.339247 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:11.371089 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:11.371407 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:11.371490 1174954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 13:19:11.568641 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 13:19:11.568810 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:11.600745 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:19:11.601041 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33943 <nil> <nil>}
	I0929 13:19:11.601060 1174954 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 13:19:12.995080 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 13:19:11.563331451 +0000
	@@ -9,23 +9,35 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 13:19:12.995110 1174954 machine.go:96] duration metric: took 3.880212273s to provisionDockerMachine
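With the drop-in moved into place and docker restarted (the diff and SysV-sync output above), the unit actually in effect on ha-399583-m02 can be double-checked; a sketch, assuming minikube ssh works for this profile and node:

    minikube -p ha-399583 ssh -n ha-399583-m02 -- \
      "systemctl cat docker | grep -E '^(ExecStart|Environment)'"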
	I0929 13:19:12.995121 1174954 client.go:171] duration metric: took 11.058683687s to LocalClient.Create
	I0929 13:19:12.995134 1174954 start.go:167] duration metric: took 11.058732804s to libmachine.API.Create "ha-399583"
	I0929 13:19:12.995141 1174954 start.go:293] postStartSetup for "ha-399583-m02" (driver="docker")
	I0929 13:19:12.995150 1174954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:19:12.995221 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:19:12.995266 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:13.030954 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:13.144283 1174954 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:19:13.148264 1174954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:19:13.148302 1174954 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:19:13.148312 1174954 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:19:13.148319 1174954 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:19:13.148329 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 13:19:13.148388 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 13:19:13.148475 1174954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 13:19:13.148486 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /etc/ssl/certs/11276402.pem
	I0929 13:19:13.148678 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:19:13.161314 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:19:13.194420 1174954 start.go:296] duration metric: took 199.264403ms for postStartSetup
	I0929 13:19:13.194833 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m02
	I0929 13:19:13.222369 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:13.222651 1174954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:19:13.222703 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:13.244676 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:13.355271 1174954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:19:13.363160 1174954 start.go:128] duration metric: took 11.43222979s to createHost
	I0929 13:19:13.363190 1174954 start.go:83] releasing machines lock for "ha-399583-m02", held for 11.432372388s
	I0929 13:19:13.363267 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m02
	I0929 13:19:13.398288 1174954 out.go:179] * Found network options:
	I0929 13:19:13.401230 1174954 out.go:179]   - NO_PROXY=192.168.49.2
	W0929 13:19:13.404184 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	W0929 13:19:13.404240 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	I0929 13:19:13.404317 1174954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:19:13.404364 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:13.404854 1174954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:19:13.404909 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m02
	I0929 13:19:13.443677 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:13.453060 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33943 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m02/id_rsa Username:docker}
	I0929 13:19:13.715409 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:19:13.762331 1174954 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:19:13.762420 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:19:13.808673 1174954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:19:13.808700 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:19:13.808733 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:19:13.808819 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:19:13.850267 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:19:13.869819 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:19:13.886028 1174954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 13:19:13.886128 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 13:19:13.904204 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:19:13.918915 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:19:13.938078 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:19:13.951190 1174954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:19:13.962822 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:19:13.979866 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:19:14.002687 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:19:14.016197 1174954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:19:14.038356 1174954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:19:14.049913 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:14.222731 1174954 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:19:14.395987 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:19:14.396052 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:19:14.396114 1174954 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 13:19:14.421883 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:19:14.441785 1174954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:19:14.491962 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:19:14.513917 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:19:14.536659 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:19:14.568085 1174954 ssh_runner.go:195] Run: which cri-dockerd
	I0929 13:19:14.572700 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 13:19:14.590508 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 13:19:14.625655 1174954 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 13:19:14.786001 1174954 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 13:19:14.949218 1174954 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 13:19:14.949299 1174954 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 13:19:14.970637 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 13:19:14.982306 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:15.114124 1174954 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 13:19:15.862209 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:19:15.883370 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 13:19:15.901976 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:19:15.919605 1174954 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 13:19:16.090584 1174954 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 13:19:16.235805 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:16.368385 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 13:19:16.386764 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 13:19:16.399426 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:16.516458 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 13:19:16.669097 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:19:16.685845 1174954 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 13:19:16.685928 1174954 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 13:19:16.694278 1174954 start.go:563] Will wait 60s for crictl version
	I0929 13:19:16.694392 1174954 ssh_runner.go:195] Run: which crictl
	I0929 13:19:16.698498 1174954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:19:16.774768 1174954 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 13:19:16.774852 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:19:16.809187 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:19:16.863090 1174954 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 13:19:16.866031 1174954 out.go:179]   - env NO_PROXY=192.168.49.2
	I0929 13:19:16.868939 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:19:16.892673 1174954 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 13:19:16.896392 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:19:16.912994 1174954 mustload.go:65] Loading cluster: ha-399583
	I0929 13:19:16.913225 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:16.913484 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:19:16.939184 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:19:16.939531 1174954 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583 for IP: 192.168.49.3
	I0929 13:19:16.939547 1174954 certs.go:194] generating shared ca certs ...
	I0929 13:19:16.939579 1174954 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:16.939745 1174954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 13:19:16.939789 1174954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 13:19:16.939818 1174954 certs.go:256] generating profile certs ...
	I0929 13:19:16.939936 1174954 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key
	I0929 13:19:16.939986 1174954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.6c426547
	I0929 13:19:16.940007 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.6c426547 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0929 13:19:17.951806 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.6c426547 ...
	I0929 13:19:17.951838 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.6c426547: {Name:mk364b0c6a477f0cee6381c4956d3d67e3f29bd0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:17.952068 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.6c426547 ...
	I0929 13:19:17.952087 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.6c426547: {Name:mk9ec6fab1a22143f857f5e99f9b70589de081fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:19:17.952187 1174954 certs.go:381] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.6c426547 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt
	I0929 13:19:17.952320 1174954 certs.go:385] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.6c426547 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key
	I0929 13:19:17.952454 1174954 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key
	I0929 13:19:17.952472 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0929 13:19:17.952492 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0929 13:19:17.952520 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0929 13:19:17.952532 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0929 13:19:17.952544 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0929 13:19:17.952555 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0929 13:19:17.952568 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0929 13:19:17.952585 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0929 13:19:17.952633 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 13:19:17.952665 1174954 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 13:19:17.952678 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:19:17.952701 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:19:17.952728 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:19:17.952756 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 13:19:17.952802 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:19:17.952835 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:19:17.952852 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem -> /usr/share/ca-certificates/1127640.pem
	I0929 13:19:17.952864 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /usr/share/ca-certificates/11276402.pem
	I0929 13:19:17.952921 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:19:17.978801 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:19:18.080915 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0929 13:19:18.089902 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0929 13:19:18.105320 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0929 13:19:18.109842 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0929 13:19:18.130354 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0929 13:19:18.135000 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0929 13:19:18.158595 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0929 13:19:18.165771 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0929 13:19:18.191203 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0929 13:19:18.199215 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0929 13:19:18.213343 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0929 13:19:18.217279 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0929 13:19:18.230828 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:19:18.259851 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:19:18.286739 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:19:18.312892 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:19:18.345383 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0929 13:19:18.372738 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:19:18.400929 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:19:18.427527 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:19:18.454148 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:19:18.481656 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 13:19:18.507564 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 13:19:18.534101 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0929 13:19:18.552624 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0929 13:19:18.572657 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0929 13:19:18.591652 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0929 13:19:18.611738 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0929 13:19:18.630245 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0929 13:19:18.649047 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0929 13:19:18.667202 1174954 ssh_runner.go:195] Run: openssl version
	I0929 13:19:18.672965 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 13:19:18.682742 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 13:19:18.693731 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 13:19:18.693795 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 13:19:18.703321 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 13:19:18.713982 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 13:19:18.723979 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 13:19:18.728151 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 13:19:18.728220 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 13:19:18.735565 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:19:18.746664 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:19:18.757113 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:19:18.761032 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:19:18.761102 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:19:18.770731 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:19:18.781204 1174954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:19:18.787911 1174954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 13:19:18.787974 1174954 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 docker true true} ...
	I0929 13:19:18.788060 1174954 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-399583-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:19:18.788086 1174954 kube-vip.go:115] generating kube-vip config ...
	I0929 13:19:18.788134 1174954 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0929 13:19:18.802548 1174954 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:19:18.802611 1174954 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0929 13:19:18.802674 1174954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:19:18.814009 1174954 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:19:18.814081 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0929 13:19:18.827223 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 13:19:18.855545 1174954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:19:18.882423 1174954 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0929 13:19:18.901554 1174954 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0929 13:19:18.905614 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:19:18.917838 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:19.018807 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:19:19.037592 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:19:19.037950 1174954 start.go:317] joinCluster: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:19:19.038083 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0929 13:19:19.038202 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:19:19.060470 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:19:19.239338 1174954 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:19.239391 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token us5i83.fcf01pewvpqcb5lq --discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-399583-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0929 13:19:48.136076 1174954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token us5i83.fcf01pewvpqcb5lq --discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-399583-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (28.896663228s)
	I0929 13:19:48.136109 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0929 13:19:48.397873 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-399583-m02 minikube.k8s.io/updated_at=2025_09_29T13_19_48_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=ha-399583 minikube.k8s.io/primary=false
	I0929 13:19:48.512172 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-399583-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0929 13:19:48.620359 1174954 start.go:319] duration metric: took 29.582405213s to joinCluster
	I0929 13:19:48.620425 1174954 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:48.620755 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:48.623321 1174954 out.go:179] * Verifying Kubernetes components...
	I0929 13:19:48.626203 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:19:48.735921 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:19:48.753049 1174954 kapi.go:59] client config for ha-399583: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0929 13:19:48.753125 1174954 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0929 13:19:48.753358 1174954 node_ready.go:35] waiting up to 6m0s for node "ha-399583-m02" to be "Ready" ...
	W0929 13:19:50.757486 1174954 node_ready.go:57] node "ha-399583-m02" has "Ready":"False" status (will retry)
	W0929 13:19:53.257719 1174954 node_ready.go:57] node "ha-399583-m02" has "Ready":"False" status (will retry)
	I0929 13:19:53.757841 1174954 node_ready.go:49] node "ha-399583-m02" is "Ready"
	I0929 13:19:53.757872 1174954 node_ready.go:38] duration metric: took 5.004492285s for node "ha-399583-m02" to be "Ready" ...
	I0929 13:19:53.757889 1174954 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:19:53.757950 1174954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:19:53.769589 1174954 api_server.go:72] duration metric: took 5.149124119s to wait for apiserver process to appear ...
	I0929 13:19:53.769620 1174954 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:19:53.769640 1174954 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 13:19:53.778508 1174954 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 13:19:53.779866 1174954 api_server.go:141] control plane version: v1.34.0
	I0929 13:19:53.779890 1174954 api_server.go:131] duration metric: took 10.263816ms to wait for apiserver health ...
	I0929 13:19:53.779899 1174954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:19:53.786312 1174954 system_pods.go:59] 17 kube-system pods found
	I0929 13:19:53.786353 1174954 system_pods.go:61] "coredns-66bc5c9577-5dqqj" [8f0fb99f-7e4a-493f-b70f-40f31bcab4d4] Running
	I0929 13:19:53.786361 1174954 system_pods.go:61] "coredns-66bc5c9577-p6v89" [3dba7282-54c9-4cf8-acd8-64548b982b4e] Running
	I0929 13:19:53.786371 1174954 system_pods.go:61] "etcd-ha-399583" [3ea005e3-9669-4b7f-98e5-a3692b0c0343] Running
	I0929 13:19:53.786375 1174954 system_pods.go:61] "etcd-ha-399583-m02" [9ba091fd-eec6-44a2-b787-f1f9d65f9362] Pending
	I0929 13:19:53.786380 1174954 system_pods.go:61] "kindnet-552n5" [c90d340a-8259-46ca-8ade-1a0b40030268] Running
	I0929 13:19:53.786387 1174954 system_pods.go:61] "kindnet-dst2d" [2786bef1-c109-449d-ad17-805dd8f59f16] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-dst2d": pod kindnet-dst2d is already assigned to node "ha-399583-m02")
	I0929 13:19:53.786393 1174954 system_pods.go:61] "kube-apiserver-ha-399583" [bc7d6fe4-455b-4daa-8f7e-a7f64256e04f] Running
	I0929 13:19:53.786402 1174954 system_pods.go:61] "kube-apiserver-ha-399583-m02" [1efc9e70-f594-43f6-983a-fbc8872669de] Pending
	I0929 13:19:53.786408 1174954 system_pods.go:61] "kube-controller-manager-ha-399583" [c034b62f-f349-480f-a0e8-9dadb8cf3271] Running
	I0929 13:19:53.786418 1174954 system_pods.go:61] "kube-controller-manager-ha-399583-m02" [0a817e7c-accd-49b5-b37c-b247802a40de] Pending
	I0929 13:19:53.786426 1174954 system_pods.go:61] "kube-proxy-2cb75" [9bedc440-6814-4d94-8c20-43960dcf6a3e] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-2cb75": pod kube-proxy-2cb75 is already assigned to node "ha-399583-m02")
	I0929 13:19:53.786437 1174954 system_pods.go:61] "kube-proxy-s2d46" [56cb5a11-c68a-45b2-af1f-8211c2f3baf5] Running
	I0929 13:19:53.786452 1174954 system_pods.go:61] "kube-scheduler-ha-399583" [069ff250-ab03-4718-8694-05ba94ef46aa] Running
	I0929 13:19:53.786459 1174954 system_pods.go:61] "kube-scheduler-ha-399583-m02" [fc1b4c16-9849-4fcf-ab34-227630e4991b] Pending
	I0929 13:19:53.786464 1174954 system_pods.go:61] "kube-vip-ha-399583" [36f87183-b427-4b90-96b5-37f5b816c1b1] Running
	I0929 13:19:53.786468 1174954 system_pods.go:61] "kube-vip-ha-399583-m02" [4ba43fb8-0080-4909-80ab-9577ed9a03cb] Pending
	I0929 13:19:53.786473 1174954 system_pods.go:61] "storage-provisioner" [5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab] Running
	I0929 13:19:53.786485 1174954 system_pods.go:74] duration metric: took 6.569114ms to wait for pod list to return data ...
	I0929 13:19:53.786498 1174954 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:19:53.791275 1174954 default_sa.go:45] found service account: "default"
	I0929 13:19:53.791347 1174954 default_sa.go:55] duration metric: took 4.840948ms for default service account to be created ...
	I0929 13:19:53.791374 1174954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:19:53.795778 1174954 system_pods.go:86] 17 kube-system pods found
	I0929 13:19:53.795813 1174954 system_pods.go:89] "coredns-66bc5c9577-5dqqj" [8f0fb99f-7e4a-493f-b70f-40f31bcab4d4] Running
	I0929 13:19:53.795820 1174954 system_pods.go:89] "coredns-66bc5c9577-p6v89" [3dba7282-54c9-4cf8-acd8-64548b982b4e] Running
	I0929 13:19:53.795825 1174954 system_pods.go:89] "etcd-ha-399583" [3ea005e3-9669-4b7f-98e5-a3692b0c0343] Running
	I0929 13:19:53.795829 1174954 system_pods.go:89] "etcd-ha-399583-m02" [9ba091fd-eec6-44a2-b787-f1f9d65f9362] Pending
	I0929 13:19:53.795833 1174954 system_pods.go:89] "kindnet-552n5" [c90d340a-8259-46ca-8ade-1a0b40030268] Running
	I0929 13:19:53.795841 1174954 system_pods.go:89] "kindnet-dst2d" [2786bef1-c109-449d-ad17-805dd8f59f16] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-dst2d": pod kindnet-dst2d is already assigned to node "ha-399583-m02")
	I0929 13:19:53.795847 1174954 system_pods.go:89] "kube-apiserver-ha-399583" [bc7d6fe4-455b-4daa-8f7e-a7f64256e04f] Running
	I0929 13:19:53.795853 1174954 system_pods.go:89] "kube-apiserver-ha-399583-m02" [1efc9e70-f594-43f6-983a-fbc8872669de] Pending
	I0929 13:19:53.795857 1174954 system_pods.go:89] "kube-controller-manager-ha-399583" [c034b62f-f349-480f-a0e8-9dadb8cf3271] Running
	I0929 13:19:53.795862 1174954 system_pods.go:89] "kube-controller-manager-ha-399583-m02" [0a817e7c-accd-49b5-b37c-b247802a40de] Pending
	I0929 13:19:53.795868 1174954 system_pods.go:89] "kube-proxy-2cb75" [9bedc440-6814-4d94-8c20-43960dcf6a3e] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-2cb75": pod kube-proxy-2cb75 is already assigned to node "ha-399583-m02")
	I0929 13:19:53.795873 1174954 system_pods.go:89] "kube-proxy-s2d46" [56cb5a11-c68a-45b2-af1f-8211c2f3baf5] Running
	I0929 13:19:53.795878 1174954 system_pods.go:89] "kube-scheduler-ha-399583" [069ff250-ab03-4718-8694-05ba94ef46aa] Running
	I0929 13:19:53.795885 1174954 system_pods.go:89] "kube-scheduler-ha-399583-m02" [fc1b4c16-9849-4fcf-ab34-227630e4991b] Pending
	I0929 13:19:53.795890 1174954 system_pods.go:89] "kube-vip-ha-399583" [36f87183-b427-4b90-96b5-37f5b816c1b1] Running
	I0929 13:19:53.795910 1174954 system_pods.go:89] "kube-vip-ha-399583-m02" [4ba43fb8-0080-4909-80ab-9577ed9a03cb] Pending
	I0929 13:19:53.795914 1174954 system_pods.go:89] "storage-provisioner" [5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab] Running
	I0929 13:19:53.795921 1174954 system_pods.go:126] duration metric: took 4.529075ms to wait for k8s-apps to be running ...
	I0929 13:19:53.795933 1174954 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:19:53.795993 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:19:53.808623 1174954 system_svc.go:56] duration metric: took 12.681804ms WaitForService to wait for kubelet
	I0929 13:19:53.808652 1174954 kubeadm.go:578] duration metric: took 5.188191498s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:19:53.808672 1174954 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:19:53.812712 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:19:53.812795 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:19:53.812822 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:19:53.812840 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:19:53.812872 1174954 node_conditions.go:105] duration metric: took 4.19349ms to run NodePressure ...
	I0929 13:19:53.812903 1174954 start.go:241] waiting for startup goroutines ...
	I0929 13:19:53.812960 1174954 start.go:255] writing updated cluster config ...
	I0929 13:19:53.816405 1174954 out.go:203] 
	I0929 13:19:53.819624 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:19:53.819804 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:53.823289 1174954 out.go:179] * Starting "ha-399583-m03" control-plane node in "ha-399583" cluster
	I0929 13:19:53.826271 1174954 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 13:19:53.830153 1174954 out.go:179] * Pulling base image v0.0.48 ...
	I0929 13:19:53.833398 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:19:53.833513 1174954 cache.go:58] Caching tarball of preloaded images
	I0929 13:19:53.833477 1174954 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:19:53.833852 1174954 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 13:19:53.833895 1174954 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 13:19:53.834066 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:19:53.864753 1174954 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 13:19:53.864773 1174954 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 13:19:53.864786 1174954 cache.go:232] Successfully downloaded all kic artifacts
	I0929 13:19:53.864810 1174954 start.go:360] acquireMachinesLock for ha-399583-m03: {Name:mk2b898fb28e1dbc9512aed087b03adf147176a2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 13:19:53.864913 1174954 start.go:364] duration metric: took 89.24µs to acquireMachinesLock for "ha-399583-m03"
	I0929 13:19:53.864938 1174954 start.go:93] Provisioning new machine with config: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:19:53.865038 1174954 start.go:125] createHost starting for "m03" (driver="docker")
	I0929 13:19:53.868470 1174954 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 13:19:53.868592 1174954 start.go:159] libmachine.API.Create for "ha-399583" (driver="docker")
	I0929 13:19:53.868621 1174954 client.go:168] LocalClient.Create starting
	I0929 13:19:53.868686 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem
	I0929 13:19:53.868719 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:19:53.868732 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:19:53.868785 1174954 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem
	I0929 13:19:53.868806 1174954 main.go:141] libmachine: Decoding PEM data...
	I0929 13:19:53.868817 1174954 main.go:141] libmachine: Parsing certificate...
	I0929 13:19:53.869050 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:19:53.890019 1174954 network_create.go:77] Found existing network {name:ha-399583 subnet:0x400015a330 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0929 13:19:53.890056 1174954 kic.go:121] calculated static IP "192.168.49.4" for the "ha-399583-m03" container
	I0929 13:19:53.890345 1174954 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 13:19:53.910039 1174954 cli_runner.go:164] Run: docker volume create ha-399583-m03 --label name.minikube.sigs.k8s.io=ha-399583-m03 --label created_by.minikube.sigs.k8s.io=true
	I0929 13:19:53.933513 1174954 oci.go:103] Successfully created a docker volume ha-399583-m03
	I0929 13:19:53.933599 1174954 cli_runner.go:164] Run: docker run --rm --name ha-399583-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583-m03 --entrypoint /usr/bin/test -v ha-399583-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 13:19:54.624336 1174954 oci.go:107] Successfully prepared a docker volume ha-399583-m03
	I0929 13:19:54.624378 1174954 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:19:54.624399 1174954 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 13:19:54.624489 1174954 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 13:19:58.913405 1174954 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v ha-399583-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.288878633s)
	I0929 13:19:58.913439 1174954 kic.go:203] duration metric: took 4.289036076s to extract preloaded images to volume ...
	W0929 13:19:58.913581 1174954 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 13:19:58.913697 1174954 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 13:19:59.026434 1174954 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-399583-m03 --name ha-399583-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-399583-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-399583-m03 --network ha-399583 --ip 192.168.49.4 --volume ha-399583-m03:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 13:19:59.421197 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Running}}
	I0929 13:19:59.449045 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Status}}
	I0929 13:19:59.475768 1174954 cli_runner.go:164] Run: docker exec ha-399583-m03 stat /var/lib/dpkg/alternatives/iptables
	I0929 13:19:59.539552 1174954 oci.go:144] the created container "ha-399583-m03" has a running status.
	I0929 13:19:59.539579 1174954 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa...
	I0929 13:20:00.165439 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0929 13:20:00.165493 1174954 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 13:20:00.230597 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Status}}
	I0929 13:20:00.279749 1174954 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 13:20:00.279772 1174954 kic_runner.go:114] Args: [docker exec --privileged ha-399583-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 13:20:00.497905 1174954 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Status}}
	I0929 13:20:00.541566 1174954 machine.go:93] provisionDockerMachine start ...
	I0929 13:20:00.541686 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:00.580321 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:00.580713 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:00.580737 1174954 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 13:20:00.832036 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583-m03
	
	I0929 13:20:00.832064 1174954 ubuntu.go:182] provisioning hostname "ha-399583-m03"
	I0929 13:20:00.832134 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:00.861349 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:00.861679 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:00.861696 1174954 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-399583-m03 && echo "ha-399583-m03" | sudo tee /etc/hostname
	I0929 13:20:01.047493 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-399583-m03
	
	I0929 13:20:01.047588 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:01.079010 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:01.079315 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:01.079337 1174954 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-399583-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-399583-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-399583-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 13:20:01.243015 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 13:20:01.243044 1174954 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 13:20:01.243061 1174954 ubuntu.go:190] setting up certificates
	I0929 13:20:01.243072 1174954 provision.go:84] configureAuth start
	I0929 13:20:01.243139 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m03
	I0929 13:20:01.265242 1174954 provision.go:143] copyHostCerts
	I0929 13:20:01.265290 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:20:01.265326 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 13:20:01.265341 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 13:20:01.265419 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 13:20:01.265507 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:20:01.265532 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 13:20:01.265539 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 13:20:01.265580 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 13:20:01.265627 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:20:01.265649 1174954 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 13:20:01.265656 1174954 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 13:20:01.265681 1174954 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 13:20:01.265732 1174954 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.ha-399583-m03 san=[127.0.0.1 192.168.49.4 ha-399583-m03 localhost minikube]
	I0929 13:20:02.210993 1174954 provision.go:177] copyRemoteCerts
	I0929 13:20:02.211070 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 13:20:02.211117 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:02.235070 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:02.342243 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0929 13:20:02.342309 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 13:20:02.370693 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0929 13:20:02.370758 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 13:20:02.406117 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0929 13:20:02.406193 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 13:20:02.433664 1174954 provision.go:87] duration metric: took 1.190577158s to configureAuth
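
	The configureAuth step above issues a server certificate from the minikube CA with SANs for 127.0.0.1, the node IP 192.168.49.4, the hostname ha-399583-m03, localhost and minikube, then copies it to /etc/docker on the node. The following Go snippet is only a rough, self-contained sketch of that kind of SAN certificate issuance, not minikube's provision code; the helper name, the RSA/PKCS#1 key format and the file paths are assumptions.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"fmt"
		"math/big"
		"net"
		"os"
		"time"
	)

	// issueServerCert is a hypothetical helper: it signs a server certificate for the
	// given SANs with an existing CA key pair, roughly the artifacts a configureAuth-style
	// step needs to produce as server.pem / server-key.pem.
	func issueServerCert(caCertPEM, caKeyPEM []byte, hosts []string) (certPEM, keyPEM []byte, err error) {
		caBlock, _ := pem.Decode(caCertPEM)
		keyBlock, _ := pem.Decode(caKeyPEM)
		if caBlock == nil || keyBlock == nil {
			return nil, nil, fmt.Errorf("invalid CA PEM input")
		}
		caCert, err := x509.ParseCertificate(caBlock.Bytes)
		if err != nil {
			return nil, nil, err
		}
		// Assumes the CA key is an RSA key in PKCS#1 form.
		caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
		if err != nil {
			return nil, nil, err
		}
		leafKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-399583-m03"}},
			NotBefore:    time.Now().Add(-time.Hour),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		// Split SANs into IPs and DNS names, matching the san=[...] list in the log.
		for _, h := range hosts {
			if ip := net.ParseIP(h); ip != nil {
				tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
			} else {
				tmpl.DNSNames = append(tmpl.DNSNames, h)
			}
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &leafKey.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(leafKey)})
		return certPEM, keyPEM, nil
	}

	func main() {
		caCert, _ := os.ReadFile("ca.pem")
		caKey, _ := os.ReadFile("ca-key.pem")
		cert, key, err := issueServerCert(caCert, caKey,
			[]string{"127.0.0.1", "192.168.49.4", "ha-399583-m03", "localhost", "minikube"})
		if err != nil {
			panic(err)
		}
		_ = os.WriteFile("server.pem", cert, 0o644)
		_ = os.WriteFile("server-key.pem", key, 0o600)
	}
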
	I0929 13:20:02.433695 1174954 ubuntu.go:206] setting minikube options for container-runtime
	I0929 13:20:02.433929 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:20:02.433990 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:02.452035 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:02.452357 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:02.452371 1174954 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 13:20:02.597297 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 13:20:02.597322 1174954 ubuntu.go:71] root file system type: overlay
	I0929 13:20:02.597429 1174954 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 13:20:02.597505 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:02.616941 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:02.617971 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:02.618086 1174954 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment="NO_PROXY=192.168.49.2"
	Environment="NO_PROXY=192.168.49.2,192.168.49.3"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 13:20:02.779351 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	Environment=NO_PROXY=192.168.49.2
	Environment=NO_PROXY=192.168.49.2,192.168.49.3
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 13:20:02.779455 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:02.799223 1174954 main.go:141] libmachine: Using SSH client type: native
	I0929 13:20:02.799534 1174954 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 33948 <nil> <nil>}
	I0929 13:20:02.799557 1174954 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 13:20:03.735315 1174954 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 13:20:02.775888663 +0000
	@@ -9,23 +9,36 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+Environment=NO_PROXY=192.168.49.2
	+Environment=NO_PROXY=192.168.49.2,192.168.49.3
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 13:20:03.735352 1174954 machine.go:96] duration metric: took 3.193757868s to provisionDockerMachine
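
	The unit update above follows a check-then-swap pattern: the rendered unit is written to docker.service.new, and only when `diff -u` reports a difference is it moved into place and the daemon reloaded, enabled and restarted, so an unchanged unit never triggers a Docker restart (the unit itself explains why ExecStart= is cleared before the new command is set). Below is a minimal local sketch of the same idea; the helper is hypothetical, not minikube's ssh_runner, and it assumes it runs as root on a systemd host.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// applyUnitIfChanged writes the rendered unit to <path>.new and only swaps it in
	// and restarts the service when diff reports a change, mirroring the log above.
	func applyUnitIfChanged(path, rendered, service string) error {
		newPath := path + ".new"
		if err := os.WriteFile(newPath, []byte(rendered), 0o644); err != nil {
			return err
		}
		// diff exits non-zero when the files differ (or the old unit is missing).
		if err := exec.Command("diff", "-u", path, newPath).Run(); err == nil {
			return os.Remove(newPath) // unchanged: drop the candidate, no restart
		}
		if err := os.Rename(newPath, path); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"systemctl", "daemon-reload"},
			{"systemctl", "enable", service},
			{"systemctl", "restart", service},
		} {
			if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		unit := "[Unit]\nDescription=example\n" // the rendered unit body would go here
		if err := applyUnitIfChanged("/lib/systemd/system/docker.service", unit, "docker"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
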
	I0929 13:20:03.735363 1174954 client.go:171] duration metric: took 9.866735605s to LocalClient.Create
	I0929 13:20:03.735376 1174954 start.go:167] duration metric: took 9.866785559s to libmachine.API.Create "ha-399583"
	I0929 13:20:03.735383 1174954 start.go:293] postStartSetup for "ha-399583-m03" (driver="docker")
	I0929 13:20:03.735394 1174954 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 13:20:03.735469 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 13:20:03.735514 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:03.756038 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:03.865155 1174954 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 13:20:03.869100 1174954 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 13:20:03.869131 1174954 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 13:20:03.869150 1174954 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 13:20:03.869157 1174954 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 13:20:03.869167 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 13:20:03.869229 1174954 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 13:20:03.869304 1174954 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 13:20:03.869311 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /etc/ssl/certs/11276402.pem
	I0929 13:20:03.869412 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 13:20:03.879291 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:20:03.910042 1174954 start.go:296] duration metric: took 174.64236ms for postStartSetup
	I0929 13:20:03.910412 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m03
	I0929 13:20:03.929549 1174954 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/config.json ...
	I0929 13:20:03.929853 1174954 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:20:03.929910 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:03.946942 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:04.045660 1174954 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 13:20:04.050696 1174954 start.go:128] duration metric: took 10.185641501s to createHost
	I0929 13:20:04.050721 1174954 start.go:83] releasing machines lock for "ha-399583-m03", held for 10.185799967s
	I0929 13:20:04.050794 1174954 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m03
	I0929 13:20:04.072117 1174954 out.go:179] * Found network options:
	I0929 13:20:04.075044 1174954 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0929 13:20:04.078038 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	W0929 13:20:04.078065 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	W0929 13:20:04.078091 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	W0929 13:20:04.078104 1174954 proxy.go:120] fail to check proxy env: Error ip not in block
	I0929 13:20:04.078176 1174954 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 13:20:04.078218 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:04.078237 1174954 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 13:20:04.078291 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:20:04.097469 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:04.098230 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:20:04.193075 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 13:20:04.335686 1174954 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 13:20:04.335795 1174954 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 13:20:04.368922 1174954 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 13:20:04.369007 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:20:04.369068 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:20:04.369232 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:20:04.387484 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 13:20:04.398856 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 13:20:04.409189 1174954 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 13:20:04.409264 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 13:20:04.419854 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:20:04.430371 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 13:20:04.440576 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 13:20:04.451380 1174954 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 13:20:04.461631 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 13:20:04.472007 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 13:20:04.481706 1174954 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 13:20:04.491487 1174954 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 13:20:04.499820 1174954 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 13:20:04.508580 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:04.598502 1174954 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 13:20:04.700077 1174954 start.go:495] detecting cgroup driver to use...
	I0929 13:20:04.700124 1174954 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 13:20:04.700177 1174954 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 13:20:04.714663 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:20:04.729615 1174954 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 13:20:04.775282 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 13:20:04.788800 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 13:20:04.801856 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 13:20:04.819869 1174954 ssh_runner.go:195] Run: which cri-dockerd
	I0929 13:20:04.825237 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 13:20:04.837176 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 13:20:04.856412 1174954 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 13:20:04.956606 1174954 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 13:20:05.046952 1174954 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 13:20:05.047052 1174954 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 13:20:05.068496 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 13:20:05.081179 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:05.188442 1174954 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 13:20:05.641030 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 13:20:05.658091 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 13:20:05.674159 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:20:05.688899 1174954 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 13:20:05.794834 1174954 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 13:20:05.898750 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:06.004134 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 13:20:06.021207 1174954 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 13:20:06.033824 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:06.131795 1174954 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 13:20:06.204688 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 13:20:06.217218 1174954 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 13:20:06.217300 1174954 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 13:20:06.230161 1174954 start.go:563] Will wait 60s for crictl version
	I0929 13:20:06.230229 1174954 ssh_runner.go:195] Run: which crictl
	I0929 13:20:06.233825 1174954 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 13:20:06.276251 1174954 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 13:20:06.276321 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:20:06.300978 1174954 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 13:20:06.340349 1174954 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 13:20:06.343223 1174954 out.go:179]   - env NO_PROXY=192.168.49.2
	I0929 13:20:06.346129 1174954 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0929 13:20:06.349043 1174954 cli_runner.go:164] Run: docker network inspect ha-399583 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 13:20:06.377719 1174954 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0929 13:20:06.388776 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:20:06.411154 1174954 mustload.go:65] Loading cluster: ha-399583
	I0929 13:20:06.411394 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:20:06.411640 1174954 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:20:06.430528 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:20:06.430841 1174954 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583 for IP: 192.168.49.4
	I0929 13:20:06.430850 1174954 certs.go:194] generating shared ca certs ...
	I0929 13:20:06.430866 1174954 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:20:06.430997 1174954 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 13:20:06.431058 1174954 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 13:20:06.431074 1174954 certs.go:256] generating profile certs ...
	I0929 13:20:06.431168 1174954 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key
	I0929 13:20:06.431196 1174954 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.416eddfa
	I0929 13:20:06.431210 1174954 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.416eddfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0929 13:20:06.888217 1174954 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.416eddfa ...
	I0929 13:20:06.888265 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.416eddfa: {Name:mk683375c282b9fb5dafe4bb714d1d87fd779b52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:20:06.888467 1174954 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.416eddfa ...
	I0929 13:20:06.888487 1174954 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.416eddfa: {Name:mk032598528855acdbae9e710bee9e27a0f4170b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 13:20:06.888608 1174954 certs.go:381] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt.416eddfa -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt
	I0929 13:20:06.888742 1174954 certs.go:385] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key.416eddfa -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key
	I0929 13:20:06.888881 1174954 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key
	I0929 13:20:06.888899 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0929 13:20:06.888916 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0929 13:20:06.888932 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0929 13:20:06.888946 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0929 13:20:06.888962 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0929 13:20:06.888979 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0929 13:20:06.888996 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0929 13:20:06.889007 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0929 13:20:06.889078 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 13:20:06.889110 1174954 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 13:20:06.889126 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 13:20:06.889150 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 13:20:06.889179 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 13:20:06.889206 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 13:20:06.889252 1174954 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 13:20:06.889284 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem -> /usr/share/ca-certificates/1127640.pem
	I0929 13:20:06.889299 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> /usr/share/ca-certificates/11276402.pem
	I0929 13:20:06.889311 1174954 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:20:06.889372 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:20:06.907926 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:20:07.012906 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0929 13:20:07.017271 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0929 13:20:07.030791 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0929 13:20:07.034486 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0929 13:20:07.047662 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0929 13:20:07.051627 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0929 13:20:07.065782 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0929 13:20:07.069487 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0929 13:20:07.082405 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0929 13:20:07.086808 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0929 13:20:07.099378 1174954 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0929 13:20:07.102876 1174954 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0929 13:20:07.115722 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 13:20:07.144712 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 13:20:07.171578 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 13:20:07.196661 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 13:20:07.227601 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0929 13:20:07.261933 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 13:20:07.288317 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 13:20:07.313720 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 13:20:07.351784 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 13:20:07.378975 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 13:20:07.409020 1174954 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 13:20:07.435100 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0929 13:20:07.455821 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0929 13:20:07.476986 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0929 13:20:07.496799 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0929 13:20:07.515486 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0929 13:20:07.534374 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0929 13:20:07.552866 1174954 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0929 13:20:07.573277 1174954 ssh_runner.go:195] Run: openssl version
	I0929 13:20:07.578770 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 13:20:07.588964 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 13:20:07.592719 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 13:20:07.592798 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 13:20:07.599718 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 13:20:07.609344 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 13:20:07.618729 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 13:20:07.622461 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 13:20:07.622531 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 13:20:07.630086 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 13:20:07.640458 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 13:20:07.650309 1174954 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:20:07.655064 1174954 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:20:07.655164 1174954 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 13:20:07.662942 1174954 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 13:20:07.673946 1174954 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 13:20:07.677370 1174954 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 13:20:07.677449 1174954 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 docker true true} ...
	I0929 13:20:07.677546 1174954 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-399583-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 13:20:07.677575 1174954 kube-vip.go:115] generating kube-vip config ...
	I0929 13:20:07.677629 1174954 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0929 13:20:07.691348 1174954 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0929 13:20:07.691407 1174954 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
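
	Because the ip_vs modules appear to be unavailable, kube-vip is configured for ARP-based VIP announcement with leader election rather than IPVS control-plane load balancing, and the manifest above is later written to /etc/kubernetes/manifests/kube-vip.yaml as a static pod. The snippet below is a trimmed-down templating sketch of how such a manifest could be rendered per cluster; it is illustrative only, not minikube's kube-vip.go, and the template and field names are assumptions.

	package main

	import (
		"os"
		"text/template"
	)

	// A reduced stand-in for the manifest above: only the values that vary per cluster
	// (VIP address, interface, API server port) are templated.
	const kubeVipTmpl = `apiVersion: v1
	kind: Pod
	metadata:
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - name: kube-vip
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    args: ["manager"]
	    env:
	    - name: port
	      value: "{{ .Port }}"
	    - name: vip_interface
	      value: {{ .Interface }}
	    - name: address
	      value: {{ .Address }}
	  hostNetwork: true
	`

	type vipParams struct {
		Address   string
		Interface string
		Port      int
	}

	func main() {
		t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
		// Values taken from the log: VIP 192.168.49.254 on eth0, API server port 8443.
		if err := t.Execute(os.Stdout, vipParams{Address: "192.168.49.254", Interface: "eth0", Port: 8443}); err != nil {
			panic(err)
		}
	}
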
	I0929 13:20:07.691467 1174954 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 13:20:07.700828 1174954 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 13:20:07.700902 1174954 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0929 13:20:07.709803 1174954 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 13:20:07.732991 1174954 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 13:20:07.756090 1174954 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0929 13:20:07.780455 1174954 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0929 13:20:07.783999 1174954 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 13:20:07.795880 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:07.897249 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:20:07.917136 1174954 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:20:07.917408 1174954 start.go:317] joinCluster: &{Name:ha-399583 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-399583 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:20:07.917533 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0929 13:20:07.917588 1174954 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:20:07.943812 1174954 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:20:08.133033 1174954 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:20:08.133084 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token swip8t.e0aypxfy2bq39z8n --discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-399583-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0929 13:20:30.840201 1174954 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token swip8t.e0aypxfy2bq39z8n --discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-399583-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (22.707096684s)
	I0929 13:20:30.840228 1174954 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0929 13:20:31.161795 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-399583-m03 minikube.k8s.io/updated_at=2025_09_29T13_20_31_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=ha-399583 minikube.k8s.io/primary=false
	I0929 13:20:31.312820 1174954 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-399583-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0929 13:20:31.514629 1174954 start.go:319] duration metric: took 23.597216979s to joinCluster
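
	Joining m03 as an additional control plane follows the standard kubeadm flow shown above: a join command with a non-expiring token is printed on the existing control plane, then executed on the new node with the control-plane, CRI socket and advertise-address flags from the log. The Go sketch below only composes that command; it is illustrative, not minikube's implementation, and in the real flow the join runs over SSH on the new node while the PATH prefix points at the minikube binaries directory.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// printJoinCommand asks kubeadm on the primary control plane for a join command
	// with a non-expiring token, like the "token create --print-join-command" line above.
	func printJoinCommand() (string, error) {
		out, err := exec.Command("sudo", "env", "PATH=/var/lib/minikube/binaries/v1.34.0:/usr/bin",
			"kubeadm", "token", "create", "--print-join-command", "--ttl=0").Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		base, err := printJoinCommand()
		if err != nil {
			panic(err)
		}
		// The extra flags mirror the ones passed for m03 in the log; this sketch only
		// prints the full command instead of executing it on the joining node.
		join := base + strings.Join([]string{
			" --ignore-preflight-errors=all",
			" --cri-socket unix:///var/run/cri-dockerd.sock",
			" --node-name=ha-399583-m03",
			" --control-plane",
			" --apiserver-advertise-address=192.168.49.4",
			" --apiserver-bind-port=8443",
		}, "")
		fmt.Println(join)
	}
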
	I0929 13:20:31.514697 1174954 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 13:20:31.515097 1174954 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:20:31.517922 1174954 out.go:179] * Verifying Kubernetes components...
	I0929 13:20:31.520826 1174954 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 13:20:31.639377 1174954 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 13:20:31.654322 1174954 kapi.go:59] client config for ha-399583: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0929 13:20:31.654409 1174954 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0929 13:20:31.654714 1174954 node_ready.go:35] waiting up to 6m0s for node "ha-399583-m03" to be "Ready" ...
	W0929 13:20:33.658813 1174954 node_ready.go:57] node "ha-399583-m03" has "Ready":"False" status (will retry)
	W0929 13:20:35.658996 1174954 node_ready.go:57] node "ha-399583-m03" has "Ready":"False" status (will retry)
	I0929 13:20:37.662695 1174954 node_ready.go:49] node "ha-399583-m03" is "Ready"
	I0929 13:20:37.662721 1174954 node_ready.go:38] duration metric: took 6.007985311s for node "ha-399583-m03" to be "Ready" ...
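
	The readiness wait above simply polls the Node object until its Ready condition turns True (about six seconds here, against a 6m timeout). A compact equivalent using client-go is sketched below, assuming a kubeconfig on disk; the path is hypothetical, since minikube builds its REST config in-process as the kapi.go line earlier shows.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls the Node until its Ready condition is True, which is
	// essentially what the "waiting up to 6m0s for node ... to be Ready" lines do.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // treat API errors as transient and keep retrying
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}

	func main() {
		// Hypothetical kubeconfig path for the sketch.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		if err := waitNodeReady(context.Background(), cs, "ha-399583-m03", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println(`node "ha-399583-m03" is Ready`)
	}
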
	I0929 13:20:37.662738 1174954 api_server.go:52] waiting for apiserver process to appear ...
	I0929 13:20:37.662801 1174954 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:20:37.678329 1174954 api_server.go:72] duration metric: took 6.163563537s to wait for apiserver process to appear ...
	I0929 13:20:37.678353 1174954 api_server.go:88] waiting for apiserver healthz status ...
	I0929 13:20:37.678372 1174954 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0929 13:20:37.687228 1174954 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0929 13:20:37.688407 1174954 api_server.go:141] control plane version: v1.34.0
	I0929 13:20:37.688435 1174954 api_server.go:131] duration metric: took 10.075523ms to wait for apiserver health ...
	I0929 13:20:37.688446 1174954 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 13:20:37.698097 1174954 system_pods.go:59] 26 kube-system pods found
	I0929 13:20:37.698143 1174954 system_pods.go:61] "coredns-66bc5c9577-5dqqj" [8f0fb99f-7e4a-493f-b70f-40f31bcab4d4] Running
	I0929 13:20:37.698150 1174954 system_pods.go:61] "coredns-66bc5c9577-p6v89" [3dba7282-54c9-4cf8-acd8-64548b982b4e] Running
	I0929 13:20:37.698155 1174954 system_pods.go:61] "etcd-ha-399583" [3ea005e3-9669-4b7f-98e5-a3692b0c0343] Running
	I0929 13:20:37.698159 1174954 system_pods.go:61] "etcd-ha-399583-m02" [9ba091fd-eec6-44a2-b787-f1f9d65f9362] Running
	I0929 13:20:37.698163 1174954 system_pods.go:61] "etcd-ha-399583-m03" [298d72e2-060d-4074-8a25-cfc31af03292] Pending
	I0929 13:20:37.698169 1174954 system_pods.go:61] "kindnet-552n5" [c90d340a-8259-46ca-8ade-1a0b40030268] Running
	I0929 13:20:37.698174 1174954 system_pods.go:61] "kindnet-dst2d" [2786bef1-c109-449d-ad17-805dd8f59f16] Running
	I0929 13:20:37.698183 1174954 system_pods.go:61] "kindnet-kdnjz" [f4e5b82e-c2b4-4626-9ad2-6133725cd817] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-kdnjz": pod kindnet-kdnjz is already assigned to node "ha-399583-m03")
	I0929 13:20:37.698196 1174954 system_pods.go:61] "kindnet-kvb6m" [da918fb5-7c31-41f6-9ea5-63dbb244c5e8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-kvb6m": pod kindnet-kvb6m is already assigned to node "ha-399583-m03")
	I0929 13:20:37.698212 1174954 system_pods.go:61] "kube-apiserver-ha-399583" [bc7d6fe4-455b-4daa-8f7e-a7f64256e04f] Running
	I0929 13:20:37.698217 1174954 system_pods.go:61] "kube-apiserver-ha-399583-m02" [1efc9e70-f594-43f6-983a-fbc8872669de] Running
	I0929 13:20:37.698222 1174954 system_pods.go:61] "kube-apiserver-ha-399583-m03" [7ce088c0-c8d6-4bbb-9a95-f8600716104a] Pending
	I0929 13:20:37.698226 1174954 system_pods.go:61] "kube-controller-manager-ha-399583" [c034b62f-f349-480f-a0e8-9dadb8cf3271] Running
	I0929 13:20:37.698236 1174954 system_pods.go:61] "kube-controller-manager-ha-399583-m02" [0a817e7c-accd-49b5-b37c-b247802a40de] Running
	I0929 13:20:37.698242 1174954 system_pods.go:61] "kube-controller-manager-ha-399583-m03" [da73157c-5019-4406-8e9a-fad730cbf2e1] Pending
	I0929 13:20:37.698247 1174954 system_pods.go:61] "kube-proxy-2cb75" [9bedc440-6814-4d94-8c20-43960dcf6a3e] Running
	I0929 13:20:37.698259 1174954 system_pods.go:61] "kube-proxy-cpdlp" [9ba5e634-5db2-4592-98d3-cd8afa30cf47] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-cpdlp": pod kube-proxy-cpdlp is already assigned to node "ha-399583-m03")
	I0929 13:20:37.698264 1174954 system_pods.go:61] "kube-proxy-s2d46" [56cb5a11-c68a-45b2-af1f-8211c2f3baf5] Running
	I0929 13:20:37.698270 1174954 system_pods.go:61] "kube-proxy-sntfr" [0c4e5a57-5d35-4f9c-aaa5-2f0ba1d88138] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-sntfr": pod kube-proxy-sntfr is already assigned to node "ha-399583-m03")
	I0929 13:20:37.698275 1174954 system_pods.go:61] "kube-scheduler-ha-399583" [069ff250-ab03-4718-8694-05ba94ef46aa] Running
	I0929 13:20:37.698283 1174954 system_pods.go:61] "kube-scheduler-ha-399583-m02" [fc1b4c16-9849-4fcf-ab34-227630e4991b] Running
	I0929 13:20:37.698288 1174954 system_pods.go:61] "kube-scheduler-ha-399583-m03" [3484019f-984c-41da-9b65-5ce66f587a8b] Pending
	I0929 13:20:37.698292 1174954 system_pods.go:61] "kube-vip-ha-399583" [36f87183-b427-4b90-96b5-37f5b816c1b1] Running
	I0929 13:20:37.698304 1174954 system_pods.go:61] "kube-vip-ha-399583-m02" [4ba43fb8-0080-4909-80ab-9577ed9a03cb] Running
	I0929 13:20:37.698308 1174954 system_pods.go:61] "kube-vip-ha-399583-m03" [7cab35ed-6974-4a9f-8a92-bb53e3846a72] Pending
	I0929 13:20:37.698312 1174954 system_pods.go:61] "storage-provisioner" [5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab] Running
	I0929 13:20:37.698324 1174954 system_pods.go:74] duration metric: took 9.871631ms to wait for pod list to return data ...
	I0929 13:20:37.698332 1174954 default_sa.go:34] waiting for default service account to be created ...
	I0929 13:20:37.701677 1174954 default_sa.go:45] found service account: "default"
	I0929 13:20:37.701703 1174954 default_sa.go:55] duration metric: took 3.360619ms for default service account to be created ...
	I0929 13:20:37.701713 1174954 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 13:20:37.707400 1174954 system_pods.go:86] 26 kube-system pods found
	I0929 13:20:37.707438 1174954 system_pods.go:89] "coredns-66bc5c9577-5dqqj" [8f0fb99f-7e4a-493f-b70f-40f31bcab4d4] Running
	I0929 13:20:37.707446 1174954 system_pods.go:89] "coredns-66bc5c9577-p6v89" [3dba7282-54c9-4cf8-acd8-64548b982b4e] Running
	I0929 13:20:37.707451 1174954 system_pods.go:89] "etcd-ha-399583" [3ea005e3-9669-4b7f-98e5-a3692b0c0343] Running
	I0929 13:20:37.707455 1174954 system_pods.go:89] "etcd-ha-399583-m02" [9ba091fd-eec6-44a2-b787-f1f9d65f9362] Running
	I0929 13:20:37.707459 1174954 system_pods.go:89] "etcd-ha-399583-m03" [298d72e2-060d-4074-8a25-cfc31af03292] Pending
	I0929 13:20:37.707463 1174954 system_pods.go:89] "kindnet-552n5" [c90d340a-8259-46ca-8ade-1a0b40030268] Running
	I0929 13:20:37.707468 1174954 system_pods.go:89] "kindnet-dst2d" [2786bef1-c109-449d-ad17-805dd8f59f16] Running
	I0929 13:20:37.707474 1174954 system_pods.go:89] "kindnet-kdnjz" [f4e5b82e-c2b4-4626-9ad2-6133725cd817] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-kdnjz": pod kindnet-kdnjz is already assigned to node "ha-399583-m03")
	I0929 13:20:37.707481 1174954 system_pods.go:89] "kindnet-kvb6m" [da918fb5-7c31-41f6-9ea5-63dbb244c5e8] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kindnet-kvb6m": pod kindnet-kvb6m is already assigned to node "ha-399583-m03")
	I0929 13:20:37.707486 1174954 system_pods.go:89] "kube-apiserver-ha-399583" [bc7d6fe4-455b-4daa-8f7e-a7f64256e04f] Running
	I0929 13:20:37.707493 1174954 system_pods.go:89] "kube-apiserver-ha-399583-m02" [1efc9e70-f594-43f6-983a-fbc8872669de] Running
	I0929 13:20:37.707501 1174954 system_pods.go:89] "kube-apiserver-ha-399583-m03" [7ce088c0-c8d6-4bbb-9a95-f8600716104a] Pending
	I0929 13:20:37.707505 1174954 system_pods.go:89] "kube-controller-manager-ha-399583" [c034b62f-f349-480f-a0e8-9dadb8cf3271] Running
	I0929 13:20:37.707509 1174954 system_pods.go:89] "kube-controller-manager-ha-399583-m02" [0a817e7c-accd-49b5-b37c-b247802a40de] Running
	I0929 13:20:37.707513 1174954 system_pods.go:89] "kube-controller-manager-ha-399583-m03" [da73157c-5019-4406-8e9a-fad730cbf2e1] Pending
	I0929 13:20:37.707519 1174954 system_pods.go:89] "kube-proxy-2cb75" [9bedc440-6814-4d94-8c20-43960dcf6a3e] Running
	I0929 13:20:37.707525 1174954 system_pods.go:89] "kube-proxy-cpdlp" [9ba5e634-5db2-4592-98d3-cd8afa30cf47] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-cpdlp": pod kube-proxy-cpdlp is already assigned to node "ha-399583-m03")
	I0929 13:20:37.707535 1174954 system_pods.go:89] "kube-proxy-s2d46" [56cb5a11-c68a-45b2-af1f-8211c2f3baf5] Running
	I0929 13:20:37.707542 1174954 system_pods.go:89] "kube-proxy-sntfr" [0c4e5a57-5d35-4f9c-aaa5-2f0ba1d88138] Pending: PodScheduled:SchedulerError (running Bind plugin "DefaultBinder": Operation cannot be fulfilled on pods/binding "kube-proxy-sntfr": pod kube-proxy-sntfr is already assigned to node "ha-399583-m03")
	I0929 13:20:37.707546 1174954 system_pods.go:89] "kube-scheduler-ha-399583" [069ff250-ab03-4718-8694-05ba94ef46aa] Running
	I0929 13:20:37.707553 1174954 system_pods.go:89] "kube-scheduler-ha-399583-m02" [fc1b4c16-9849-4fcf-ab34-227630e4991b] Running
	I0929 13:20:37.707561 1174954 system_pods.go:89] "kube-scheduler-ha-399583-m03" [3484019f-984c-41da-9b65-5ce66f587a8b] Pending
	I0929 13:20:37.707569 1174954 system_pods.go:89] "kube-vip-ha-399583" [36f87183-b427-4b90-96b5-37f5b816c1b1] Running
	I0929 13:20:37.707573 1174954 system_pods.go:89] "kube-vip-ha-399583-m02" [4ba43fb8-0080-4909-80ab-9577ed9a03cb] Running
	I0929 13:20:37.707579 1174954 system_pods.go:89] "kube-vip-ha-399583-m03" [7cab35ed-6974-4a9f-8a92-bb53e3846a72] Pending
	I0929 13:20:37.707585 1174954 system_pods.go:89] "storage-provisioner" [5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab] Running
	I0929 13:20:37.707595 1174954 system_pods.go:126] duration metric: took 5.876446ms to wait for k8s-apps to be running ...
	I0929 13:20:37.707604 1174954 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 13:20:37.707665 1174954 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:20:37.722604 1174954 system_svc.go:56] duration metric: took 14.990547ms WaitForService to wait for kubelet
	I0929 13:20:37.722631 1174954 kubeadm.go:578] duration metric: took 6.207905788s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 13:20:37.722658 1174954 node_conditions.go:102] verifying NodePressure condition ...
	I0929 13:20:37.726306 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:20:37.726334 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:20:37.726358 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:20:37.726363 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:20:37.726368 1174954 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 13:20:37.726373 1174954 node_conditions.go:123] node cpu capacity is 2
	I0929 13:20:37.726378 1174954 node_conditions.go:105] duration metric: took 3.713854ms to run NodePressure ...
	I0929 13:20:37.726391 1174954 start.go:241] waiting for startup goroutines ...
	I0929 13:20:37.726413 1174954 start.go:255] writing updated cluster config ...
	I0929 13:20:37.726766 1174954 ssh_runner.go:195] Run: rm -f paused
	I0929 13:20:37.730586 1174954 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:20:37.731091 1174954 kapi.go:59] client config for ha-399583: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.crt", KeyFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/ha-399583/client.key", CAFile:"/home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x20f8010), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0929 13:20:37.752334 1174954 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5dqqj" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.759262 1174954 pod_ready.go:94] pod "coredns-66bc5c9577-5dqqj" is "Ready"
	I0929 13:20:37.759286 1174954 pod_ready.go:86] duration metric: took 6.832295ms for pod "coredns-66bc5c9577-5dqqj" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.759296 1174954 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-p6v89" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.765860 1174954 pod_ready.go:94] pod "coredns-66bc5c9577-p6v89" is "Ready"
	I0929 13:20:37.765885 1174954 pod_ready.go:86] duration metric: took 6.583186ms for pod "coredns-66bc5c9577-p6v89" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.769329 1174954 pod_ready.go:83] waiting for pod "etcd-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.776112 1174954 pod_ready.go:94] pod "etcd-ha-399583" is "Ready"
	I0929 13:20:37.776142 1174954 pod_ready.go:86] duration metric: took 6.782999ms for pod "etcd-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.776160 1174954 pod_ready.go:83] waiting for pod "etcd-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.784277 1174954 pod_ready.go:94] pod "etcd-ha-399583-m02" is "Ready"
	I0929 13:20:37.784347 1174954 pod_ready.go:86] duration metric: took 8.177065ms for pod "etcd-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.784371 1174954 pod_ready.go:83] waiting for pod "etcd-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:37.931760 1174954 request.go:683] "Waited before sending request" delay="147.239674ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-399583-m03"
	I0929 13:20:38.131779 1174954 request.go:683] "Waited before sending request" delay="196.159173ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:38.532543 1174954 request.go:683] "Waited before sending request" delay="195.316686ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:38.535802 1174954 pod_ready.go:94] pod "etcd-ha-399583-m03" is "Ready"
	I0929 13:20:38.535831 1174954 pod_ready.go:86] duration metric: took 751.441723ms for pod "etcd-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:38.732127 1174954 request.go:683] "Waited before sending request" delay="196.195588ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0929 13:20:38.736197 1174954 pod_ready.go:83] waiting for pod "kube-apiserver-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:38.932435 1174954 request.go:683] "Waited before sending request" delay="196.132523ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-399583"
	I0929 13:20:39.132519 1174954 request.go:683] "Waited before sending request" delay="196.441794ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583"
	I0929 13:20:39.136200 1174954 pod_ready.go:94] pod "kube-apiserver-ha-399583" is "Ready"
	I0929 13:20:39.136264 1174954 pod_ready.go:86] duration metric: took 400.038595ms for pod "kube-apiserver-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:39.136279 1174954 pod_ready.go:83] waiting for pod "kube-apiserver-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:39.332607 1174954 request.go:683] "Waited before sending request" delay="196.169086ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-399583-m02"
	I0929 13:20:39.532382 1174954 request.go:683] "Waited before sending request" delay="194.191784ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m02"
	I0929 13:20:39.553510 1174954 pod_ready.go:94] pod "kube-apiserver-ha-399583-m02" is "Ready"
	I0929 13:20:39.553537 1174954 pod_ready.go:86] duration metric: took 417.249893ms for pod "kube-apiserver-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:39.553548 1174954 pod_ready.go:83] waiting for pod "kube-apiserver-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:39.731945 1174954 request.go:683] "Waited before sending request" delay="178.300272ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-399583-m03"
	I0929 13:20:39.932415 1174954 request.go:683] "Waited before sending request" delay="196.153717ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:39.936788 1174954 pod_ready.go:94] pod "kube-apiserver-ha-399583-m03" is "Ready"
	I0929 13:20:39.936817 1174954 pod_ready.go:86] duration metric: took 383.262142ms for pod "kube-apiserver-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.132200 1174954 request.go:683] "Waited before sending request" delay="195.243602ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0929 13:20:40.136932 1174954 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.332188 1174954 request.go:683] "Waited before sending request" delay="195.095556ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-399583"
	I0929 13:20:40.532350 1174954 request.go:683] "Waited before sending request" delay="194.129072ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583"
	I0929 13:20:40.535503 1174954 pod_ready.go:94] pod "kube-controller-manager-ha-399583" is "Ready"
	I0929 13:20:40.535532 1174954 pod_ready.go:86] duration metric: took 398.517332ms for pod "kube-controller-manager-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.535543 1174954 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.731836 1174954 request.go:683] "Waited before sending request" delay="196.209078ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-399583-m02"
	I0929 13:20:40.931591 1174954 request.go:683] "Waited before sending request" delay="196.130087ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m02"
	I0929 13:20:40.937829 1174954 pod_ready.go:94] pod "kube-controller-manager-ha-399583-m02" is "Ready"
	I0929 13:20:40.937859 1174954 pod_ready.go:86] duration metric: took 402.309636ms for pod "kube-controller-manager-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:40.937869 1174954 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:41.132415 1174954 request.go:683] "Waited before sending request" delay="194.463278ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-399583-m03"
	I0929 13:20:41.332667 1174954 request.go:683] "Waited before sending request" delay="195.238162ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:41.335801 1174954 pod_ready.go:94] pod "kube-controller-manager-ha-399583-m03" is "Ready"
	I0929 13:20:41.335830 1174954 pod_ready.go:86] duration metric: took 397.953158ms for pod "kube-controller-manager-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:41.532274 1174954 request.go:683] "Waited before sending request" delay="196.318601ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0929 13:20:41.536572 1174954 pod_ready.go:83] waiting for pod "kube-proxy-2cb75" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:41.732029 1174954 request.go:683] "Waited before sending request" delay="195.329363ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2cb75"
	I0929 13:20:41.931901 1174954 request.go:683] "Waited before sending request" delay="196.155875ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m02"
	I0929 13:20:41.935289 1174954 pod_ready.go:94] pod "kube-proxy-2cb75" is "Ready"
	I0929 13:20:41.935368 1174954 pod_ready.go:86] duration metric: took 398.750909ms for pod "kube-proxy-2cb75" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:41.935391 1174954 pod_ready.go:83] waiting for pod "kube-proxy-s2d46" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:42.131780 1174954 request.go:683] "Waited before sending request" delay="196.277501ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s2d46"
	I0929 13:20:42.331524 1174954 request.go:683] "Waited before sending request" delay="194.302628ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583"
	I0929 13:20:42.348523 1174954 pod_ready.go:94] pod "kube-proxy-s2d46" is "Ready"
	I0929 13:20:42.348592 1174954 pod_ready.go:86] duration metric: took 413.180326ms for pod "kube-proxy-s2d46" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:42.348616 1174954 pod_ready.go:83] waiting for pod "kube-proxy-sntfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:42.532073 1174954 request.go:683] "Waited before sending request" delay="183.332162ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sntfr"
	I0929 13:20:42.732175 1174954 request.go:683] "Waited before sending request" delay="196.130627ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:42.931871 1174954 request.go:683] "Waited before sending request" delay="82.217461ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sntfr"
	I0929 13:20:43.131847 1174954 request.go:683] "Waited before sending request" delay="196.336324ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:43.532015 1174954 request.go:683] "Waited before sending request" delay="179.206866ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	I0929 13:20:43.932030 1174954 request.go:683] "Waited before sending request" delay="79.164046ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-399583-m03"
	W0929 13:20:44.355878 1174954 pod_ready.go:104] pod "kube-proxy-sntfr" is not "Ready", error: <nil>
	W0929 13:20:46.362030 1174954 pod_ready.go:104] pod "kube-proxy-sntfr" is not "Ready", error: <nil>
	W0929 13:20:48.855313 1174954 pod_ready.go:104] pod "kube-proxy-sntfr" is not "Ready", error: <nil>
	W0929 13:20:50.855820 1174954 pod_ready.go:104] pod "kube-proxy-sntfr" is not "Ready", error: <nil>
	I0929 13:20:51.855721 1174954 pod_ready.go:94] pod "kube-proxy-sntfr" is "Ready"
	I0929 13:20:51.855753 1174954 pod_ready.go:86] duration metric: took 9.507117999s for pod "kube-proxy-sntfr" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.860987 1174954 pod_ready.go:83] waiting for pod "kube-scheduler-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.869018 1174954 pod_ready.go:94] pod "kube-scheduler-ha-399583" is "Ready"
	I0929 13:20:51.869050 1174954 pod_ready.go:86] duration metric: took 8.033039ms for pod "kube-scheduler-ha-399583" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.869059 1174954 pod_ready.go:83] waiting for pod "kube-scheduler-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.877266 1174954 pod_ready.go:94] pod "kube-scheduler-ha-399583-m02" is "Ready"
	I0929 13:20:51.877293 1174954 pod_ready.go:86] duration metric: took 8.227437ms for pod "kube-scheduler-ha-399583-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.877303 1174954 pod_ready.go:83] waiting for pod "kube-scheduler-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.882444 1174954 pod_ready.go:94] pod "kube-scheduler-ha-399583-m03" is "Ready"
	I0929 13:20:51.882471 1174954 pod_ready.go:86] duration metric: took 5.161484ms for pod "kube-scheduler-ha-399583-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 13:20:51.882483 1174954 pod_ready.go:40] duration metric: took 14.15186681s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 13:20:51.957469 1174954 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 13:20:51.962421 1174954 out.go:179] * Done! kubectl is now configured to use "ha-399583" cluster and "default" namespace by default
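
For context, the node_ready/pod_ready waits logged above simply poll the Kubernetes API until the relevant "Ready" conditions report True. Below is a minimal, hypothetical sketch of that style of polling using client-go; it is not minikube's actual implementation, and the kubeconfig location, node name, and timeout are illustrative assumptions taken from the log lines above.

// Hypothetical sketch of the readiness polling seen above; not minikube's code.
// Assumes k8s.io/client-go and a kubeconfig at the default ~/.kube/config path.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node object until its Ready condition is True.
func waitForNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat transient API errors as "not ready yet" and retry
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// 6m0s and the node name mirror the "waiting up to 6m0s for node ha-399583-m03" log line.
	if err := waitForNodeReady(context.Background(), cs, "ha-399583-m03", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println(`node "ha-399583-m03" is "Ready"`)
}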
	
	
	==> Docker <==
	Sep 29 13:19:04 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:04Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p6v89_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:04 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:04Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dqqj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:05 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:05Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dqqj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:06 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:06Z" level=info msg="Stop pulling image docker.io/kindest/kindnetd:v20250512-df8de77b: Status: Downloaded newer image for kindest/kindnetd:v20250512-df8de77b"
	Sep 29 13:19:09 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:09Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Sep 29 13:19:16 ha-399583 dockerd[1187]: time="2025-09-29T13:19:16.675094787Z" level=info msg="ignoring event" container=f9a485d796f1697bb95b77b506d6d7d33a25885377c6842c14c0361eeaa21499 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:16 ha-399583 dockerd[1187]: time="2025-09-29T13:19:16.868803553Z" level=info msg="ignoring event" container=6e602a051efa9808202ff5e0a632206364d9f55dc499d3b6560233b6b121e69c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:17 ha-399583 dockerd[1187]: time="2025-09-29T13:19:17.303396457Z" level=info msg="ignoring event" container=e1ae11a45d2ff19e6c97670cfafd46212633ec26395d6693473ad110b077e269 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:17 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/6c2aec96a1a19b5b0a1ac112841a4e3b12f107c874d56c4cd9ffa6e933696aa0/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 29 13:19:17 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:17Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p6v89_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:17 ha-399583 dockerd[1187]: time="2025-09-29T13:19:17.521948603Z" level=info msg="ignoring event" container=053eae7f968bc8920259052b979365028efdf5b6724575a3a95323877965773b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:17 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:17Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p6v89_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:17 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:17Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/43b7c0b16072c37f6e6d3559eb5698c9f76cb94808a04f73835d951122fee25b/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 29 13:19:18 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:18Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dqqj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:18 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:18Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-5dqqj_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:18 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:18Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-66bc5c9577-p6v89_kube-system\": unexpected command output Device \"eth0\" does not exist.\n with error: exit status 1"
	Sep 29 13:19:30 ha-399583 dockerd[1187]: time="2025-09-29T13:19:30.781289164Z" level=info msg="ignoring event" container=8a4f891b2f49420456c0ac4f63dcbc4ff1b870b480314e84049f701543a1c1d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:31 ha-399583 dockerd[1187]: time="2025-09-29T13:19:31.051242222Z" level=info msg="ignoring event" container=6c2aec96a1a19b5b0a1ac112841a4e3b12f107c874d56c4cd9ffa6e933696aa0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:31 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:31Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/04e39e63500da1f71b6d61f057b3f5efa816d85f61b552b6cb621d1e4243c7bd/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options trust-ad ndots:0 edns0]"
	Sep 29 13:19:32 ha-399583 dockerd[1187]: time="2025-09-29T13:19:32.275605500Z" level=info msg="ignoring event" container=8cb0f155a82909e58a5e4155b29fce1a39d252e1f58821b98c8595baea1a88bb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:32 ha-399583 dockerd[1187]: time="2025-09-29T13:19:32.634371914Z" level=info msg="ignoring event" container=43b7c0b16072c37f6e6d3559eb5698c9f76cb94808a04f73835d951122fee25b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:19:32 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:19:32Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b842712a337e9e223871f9172d7e1a7055b557d1a0ebcd01d0811ba6e235565a/resolv.conf as [nameserver 192.168.49.1 search us-east-2.compute.internal options edns0 trust-ad ndots:0]"
	Sep 29 13:19:33 ha-399583 dockerd[1187]: time="2025-09-29T13:19:33.407253546Z" level=info msg="ignoring event" container=c27d8d57cfbf9403c8ac768b52321e99a3d55657784a667c457dfd2e153c2654 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Sep 29 13:20:54 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:20:54Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/bf493a34c0ac0ff83676a8c800ef381c857b42a8fa909dc64a7ad5b55598d5b0/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal options ndots:5]"
	Sep 29 13:20:56 ha-399583 cri-dockerd[1485]: time="2025-09-29T13:20:56Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	70a9591aafb8b       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   10 seconds ago       Running             busybox                   0                   bf493a34c0ac0       busybox-7b57f96db7-jwnlz
	42379890b9d92       ba04bb24b9575                                                                                         About a minute ago   Running             storage-provisioner       1                   ee9c364d50701       storage-provisioner
	9715ed50002de       138784d87c9c5                                                                                         About a minute ago   Running             coredns                   2                   b842712a337e9       coredns-66bc5c9577-5dqqj
	d674f80b2f082       138784d87c9c5                                                                                         About a minute ago   Running             coredns                   2                   04e39e63500da       coredns-66bc5c9577-p6v89
	8cb0f155a8290       138784d87c9c5                                                                                         About a minute ago   Exited              coredns                   1                   43b7c0b16072c       coredns-66bc5c9577-5dqqj
	8a4f891b2f494       138784d87c9c5                                                                                         About a minute ago   Exited              coredns                   1                   6c2aec96a1a19       coredns-66bc5c9577-p6v89
	bb51f3ad1da69       kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a              2 minutes ago        Running             kindnet-cni               0                   9218c0ec505c1       kindnet-552n5
	476e33049da20       6fc32d66c1411                                                                                         2 minutes ago        Running             kube-proxy                0                   c699a05b6ea5a       kube-proxy-s2d46
	c27d8d57cfbf9       ba04bb24b9575                                                                                         2 minutes ago        Exited              storage-provisioner       0                   ee9c364d50701       storage-provisioner
	f10714b286d96       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     2 minutes ago        Running             kube-vip                  0                   4f7d569139668       kube-vip-ha-399583
	8726a81976510       996be7e86d9b3                                                                                         2 minutes ago        Running             kube-controller-manager   0                   cff6c86576de9       kube-controller-manager-ha-399583
	32e3ec1309ec0       a1894772a478e                                                                                         2 minutes ago        Running             etcd                      0                   3c8775165fbaf       etcd-ha-399583
	59b02d97e1876       d291939e99406                                                                                         2 minutes ago        Running             kube-apiserver            0                   6200f7fcf684c       kube-apiserver-ha-399583
	e5057f638dbe7       a25f5ef9c34c3                                                                                         2 minutes ago        Running             kube-scheduler            0                   7c9691ea056e9       kube-scheduler-ha-399583
	
	
	==> coredns [8a4f891b2f49] <==
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:48610 - 25140 "HINFO IN 7493471838613022335.9023286310770280868. udp 57 false 512" - - 0 5.00995779s
	[ERROR] plugin/errors: 2 7493471838613022335.9023286310770280868. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:52860 - 10305 "HINFO IN 7493471838613022335.9023286310770280868. udp 57 false 512" - - 0 5.000098632s
	[ERROR] plugin/errors: 2 7493471838613022335.9023286310770280868. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [8cb0f155a829] <==
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: network is unreachable
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/health: Going into lameduck mode for 5s
	[INFO] 127.0.0.1:54798 - 15184 "HINFO IN 5192265244121682960.9157686456546179351. udp 57 false 512" - - 0 5.036302766s
	[ERROR] plugin/errors: 2 5192265244121682960.9157686456546179351. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	[INFO] 127.0.0.1:38140 - 2359 "HINFO IN 5192265244121682960.9157686456546179351. udp 57 false 512" - - 0 5.005272334s
	[ERROR] plugin/errors: 2 5192265244121682960.9157686456546179351. HINFO: dial udp 192.168.49.1:53: connect: network is unreachable
	
	
	==> coredns [9715ed50002d] <==
	[INFO] 10.244.1.2:43773 - 6 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.002164339s
	[INFO] 10.244.1.3:44676 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000127574s
	[INFO] 10.244.1.3:42927 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000100308s
	[INFO] 10.244.1.3:34109 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,aa,rd,ra 126 0.000136371s
	[INFO] 10.244.0.4:49358 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000090512s
	[INFO] 10.244.1.2:36584 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003747176s
	[INFO] 10.244.1.2:58306 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00023329s
	[INFO] 10.244.1.2:42892 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002171404s
	[INFO] 10.244.1.2:49531 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000175485s
	[INFO] 10.244.1.3:50947 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001342678s
	[INFO] 10.244.1.3:53717 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000210742s
	[INFO] 10.244.1.3:38752 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000193996s
	[INFO] 10.244.1.3:34232 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000246403s
	[INFO] 10.244.1.3:51880 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189811s
	[INFO] 10.244.0.4:58129 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014145s
	[INFO] 10.244.0.4:35718 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001338731s
	[INFO] 10.244.0.4:44633 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000182828s
	[INFO] 10.244.0.4:37431 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000133671s
	[INFO] 10.244.0.4:59341 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000218119s
	[INFO] 10.244.1.2:35029 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000150549s
	[INFO] 10.244.1.2:57163 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000068136s
	[INFO] 10.244.1.3:49247 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000145593s
	[INFO] 10.244.1.3:34936 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187702s
	[INFO] 10.244.1.3:41595 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000098307s
	[INFO] 10.244.0.4:46052 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000120789s
	
	
	==> coredns [d674f80b2f08] <==
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43817 - 1332 "HINFO IN 2110927985003271130.2061164144012676697. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022618601s
	[INFO] 10.244.1.3:37613 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000236352s
	[INFO] 10.244.1.3:47369 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,rd,ra 124 0.000851029s
	[INFO] 10.244.0.4:56432 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000184478s
	[INFO] 10.244.0.4:52692 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 89 0.019662893s
	[INFO] 10.244.0.4:34677 - 5 "PTR IN 135.186.33.3.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd,ra 124 0.000135s
	[INFO] 10.244.0.4:50968 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.001367646s
	[INFO] 10.244.1.2:35001 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000106709s
	[INFO] 10.244.1.2:58996 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00022781s
	[INFO] 10.244.1.2:52458 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145872s
	[INFO] 10.244.1.2:49841 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000169872s
	[INFO] 10.244.1.3:41156 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000116178s
	[INFO] 10.244.1.3:48480 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001146507s
	[INFO] 10.244.1.3:51760 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000223913s
	[INFO] 10.244.0.4:39844 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001029451s
	[INFO] 10.244.0.4:37614 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000242891s
	[INFO] 10.244.0.4:45422 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000125507s
	[INFO] 10.244.1.2:59758 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000161011s
	[INFO] 10.244.1.2:60717 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000170825s
	[INFO] 10.244.1.3:41909 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00019151s
	[INFO] 10.244.0.4:42587 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163891s
	[INFO] 10.244.0.4:58665 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000167805s
	[INFO] 10.244.0.4:52810 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000066758s
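
A note on reading the CoreDNS query logs above: each [INFO] line follows the log plugin's default layout of client address and port, query ID, the quoted query (type, class, name, protocol, request size in bytes, the DNSSEC "DO" bit, and the advertised EDNS0 buffer size), then the response code, response flags, response size in bytes, and the request duration. For example, the line "10.244.1.3:44676 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 31 0.000127574s" records an AAAA lookup for kubernetes.io from 10.244.1.3, answered NOERROR with a 31-byte response in roughly 0.13 ms. The HINFO queries with long random names (and the matching plugin/errors entries) appear to be CoreDNS's startup loop-detection probes; their "dial udp 192.168.49.1:53 ... network is unreachable" errors indicate the upstream resolver was not reachable from the pod at that moment.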
	
	
	==> describe nodes <==
	Name:               ha-399583
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-399583
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=ha-399583
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T13_19_00_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:18:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-399583
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:21:02 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:21:01 +0000   Mon, 29 Sep 2025 13:18:50 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:21:01 +0000   Mon, 29 Sep 2025 13:18:50 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:21:01 +0000   Mon, 29 Sep 2025 13:18:50 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:21:01 +0000   Mon, 29 Sep 2025 13:18:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-399583
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 3cb22907f09f471eaac8169fc8a85b65
	  System UUID:                45d3b675-cd2b-4b39-985d-76e474d341de
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-jwnlz             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 coredns-66bc5c9577-5dqqj             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 coredns-66bc5c9577-p6v89             100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     2m6s
	  kube-system                 etcd-ha-399583                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         2m8s
	  kube-system                 kindnet-552n5                        100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      2m6s
	  kube-system                 kube-apiserver-ha-399583             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-ha-399583    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-s2d46                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-scheduler-ha-399583             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m9s
	  kube-system                 kube-vip-ha-399583                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m13s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             290Mi (3%)  390Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m3s                   kube-proxy       
	  Normal   NodeAllocatableEnforced  2m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m19s (x8 over 2m19s)  kubelet          Node ha-399583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m19s (x8 over 2m19s)  kubelet          Node ha-399583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m19s (x7 over 2m19s)  kubelet          Node ha-399583 status is now: NodeHasSufficientPID
	  Normal   Starting                 2m8s                   kubelet          Starting kubelet.
	  Warning  CgroupV1                 2m8s                   kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  2m8s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  2m8s                   kubelet          Node ha-399583 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m8s                   kubelet          Node ha-399583 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m8s                   kubelet          Node ha-399583 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m7s                   node-controller  Node ha-399583 event: Registered Node ha-399583 in Controller
	  Normal   RegisteredNode           86s                    node-controller  Node ha-399583 event: Registered Node ha-399583 in Controller
	  Normal   RegisteredNode           39s                    node-controller  Node ha-399583 event: Registered Node ha-399583 in Controller
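
As a quick sanity check on the "Allocated resources" tables in this section: each percentage is the summed requests or limits relative to the node's allocatable capacity, apparently truncated to a whole percent. For ha-399583, CPU requests of 950m against 2 allocatable CPUs (2000m) give 950/2000 = 47.5%, shown as 47%; memory requests of 290Mi against 8022300Ki allocatable give roughly 296960Ki/8022300Ki ≈ 3.7%, shown as 3%.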
	
	
	Name:               ha-399583-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-399583-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=ha-399583
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_29T13_19_48_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:19:47 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-399583-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:20:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:20:28 +0000   Mon, 29 Sep 2025 13:19:47 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:20:28 +0000   Mon, 29 Sep 2025 13:19:47 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:20:28 +0000   Mon, 29 Sep 2025 13:19:47 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:20:28 +0000   Mon, 29 Sep 2025 13:19:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-399583-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 c4d265f6f5254457a0d07afe9ec8f395
	  System UUID:                6be8d3fe-d7ac-4d4e-912a-855ffd6a8a5a
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8md6f                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  default                     busybox-7b57f96db7-92l4c                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 etcd-ha-399583-m02                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         74s
	  kube-system                 kindnet-dst2d                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      80s
	  kube-system                 kube-apiserver-ha-399583-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-controller-manager-ha-399583-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-proxy-2cb75                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-ha-399583-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-vip-ha-399583-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        71s   kube-proxy       
	  Normal  RegisteredNode  77s   node-controller  Node ha-399583-m02 event: Registered Node ha-399583-m02 in Controller
	  Normal  RegisteredNode  76s   node-controller  Node ha-399583-m02 event: Registered Node ha-399583-m02 in Controller
	  Normal  RegisteredNode  39s   node-controller  Node ha-399583-m02 event: Registered Node ha-399583-m02 in Controller
	
	
	Name:               ha-399583-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=ha-399583-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=ha-399583
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_29T13_20_31_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 13:20:30 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-399583-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 13:21:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 13:20:50 +0000   Mon, 29 Sep 2025 13:20:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 13:20:50 +0000   Mon, 29 Sep 2025 13:20:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 13:20:50 +0000   Mon, 29 Sep 2025 13:20:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 13:20:50 +0000   Mon, 29 Sep 2025 13:20:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-399583-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 1276f3da2509469f932c5f388a8929fd
	  System UUID:                4d61a959-d6ea-41f7-aef8-195886039d6b
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-2lt6z                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s
	  kube-system                 etcd-ha-399583-m03                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         30s
	  kube-system                 kindnet-kvb6m                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      37s
	  kube-system                 kube-apiserver-ha-399583-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-ha-399583-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-sntfr                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-scheduler-ha-399583-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-vip-ha-399583-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (1%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        15s   kube-proxy       
	  Normal  RegisteredNode  37s   node-controller  Node ha-399583-m03 event: Registered Node ha-399583-m03 in Controller
	  Normal  RegisteredNode  36s   node-controller  Node ha-399583-m03 event: Registered Node ha-399583-m03 in Controller
	  Normal  RegisteredNode  34s   node-controller  Node ha-399583-m03 event: Registered Node ha-399583-m03 in Controller
	
	
	==> dmesg <==
	[Sep29 11:47] kauditd_printk_skb: 8 callbacks suppressed
	[Sep29 12:09] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[Sep29 13:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [32e3ec1309ec] <==
	{"level":"warn","ts":"2025-09-29T13:20:17.703276Z","caller":"etcdhttp/peer.go:152","msg":"failed to promote a member","member-id":"db4645cfc6f218b6","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-09-29T13:20:17.829890Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:20:17.832689Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T13:20:18.045476Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"db4645cfc6f218b6","error":"failed to dial db4645cfc6f218b6 on stream Message (peer db4645cfc6f218b6 failed to find local node aec36adc501070cc)"}
	{"level":"warn","ts":"2025-09-29T13:20:18.079557Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"warn","ts":"2025-09-29T13:20:18.149922Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55898","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:20:18.158517Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892 13899027835773194409 15800393101374265526)"}
	{"level":"info","ts":"2025-09-29T13:20:18.158925Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.159083Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"db4645cfc6f218b6"}
	{"level":"warn","ts":"2025-09-29T13:20:18.189269Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55902","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:20:18.355177Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"db4645cfc6f218b6","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-29T13:20:18.355218Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.355232Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"warn","ts":"2025-09-29T13:20:18.357904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55944","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T13:20:18.402996Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.420825Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"warn","ts":"2025-09-29T13:20:18.438694Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"db4645cfc6f218b6","error":"failed to write db4645cfc6f218b6 on stream MsgApp v2 (write tcp 192.168.49.2:2380->192.168.49.4:46836: write: broken pipe)"}
	{"level":"warn","ts":"2025-09-29T13:20:18.438970Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.473639Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"db4645cfc6f218b6","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-29T13:20:18.473899Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:18.474021Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"db4645cfc6f218b6"}
	{"level":"info","ts":"2025-09-29T13:20:29.597236Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-29T13:20:30.896477Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-29T13:20:47.099116Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-29T13:20:47.674468Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"db4645cfc6f218b6","bytes":1507865,"size":"1.5 MB","took":"30.114274637s"}
	
	
	==> kernel <==
	 13:21:07 up  5:03,  0 users,  load average: 5.12, 2.65, 2.39
	Linux ha-399583 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [bb51f3ad1da6] <==
	I0929 13:20:17.711399       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:17.711436       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:27.719270       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 13:20:27.719304       1 main.go:301] handling current node
	I0929 13:20:27.719321       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:27.719327       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:37.711568       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0929 13:20:37.711611       1 main.go:324] Node ha-399583-m03 has CIDR [10.244.2.0/24] 
	I0929 13:20:37.711863       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.168.49.4 Flags: [] Table: 0 Realm: 0} 
	I0929 13:20:37.712006       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 13:20:37.712161       1 main.go:301] handling current node
	I0929 13:20:37.712191       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:37.712203       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:47.715052       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 13:20:47.715143       1 main.go:301] handling current node
	I0929 13:20:47.715192       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:47.715219       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:47.715438       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0929 13:20:47.715454       1 main.go:324] Node ha-399583-m03 has CIDR [10.244.2.0/24] 
	I0929 13:20:57.712653       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0929 13:20:57.712688       1 main.go:301] handling current node
	I0929 13:20:57.712703       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0929 13:20:57.712709       1 main.go:324] Node ha-399583-m02 has CIDR [10.244.1.0/24] 
	I0929 13:20:57.713202       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0929 13:20:57.713225       1 main.go:324] Node ha-399583-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [59b02d97e187] <==
	I0929 13:18:55.544647       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0929 13:18:55.553138       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0929 13:18:55.554568       1 controller.go:667] quota admission added evaluator for: endpoints
	I0929 13:18:55.559782       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0929 13:18:55.737596       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0929 13:18:59.304415       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0929 13:18:59.318058       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0929 13:18:59.331512       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0929 13:19:00.997223       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0929 13:19:01.599047       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:19:01.605021       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0929 13:19:01.692438       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0929 13:20:10.474768       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 13:20:15.697182       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0929 13:20:59.214609       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36270: use of closed network connection
	E0929 13:20:59.541054       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36296: use of closed network connection
	E0929 13:20:59.812197       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36316: use of closed network connection
	E0929 13:21:00.330399       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36334: use of closed network connection
	E0929 13:21:00.616085       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36346: use of closed network connection
	E0929 13:21:01.152353       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36390: use of closed network connection
	E0929 13:21:01.447946       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36402: use of closed network connection
	E0929 13:21:01.695403       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36416: use of closed network connection
	E0929 13:21:02.014893       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36432: use of closed network connection
	E0929 13:21:02.288801       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36450: use of closed network connection
	E0929 13:21:02.531347       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:36468: use of closed network connection
	
	
	==> kube-controller-manager [8726a8197651] <==
	I0929 13:19:00.786478       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 13:19:00.789065       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0929 13:19:00.789132       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 13:19:00.789288       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 13:19:00.789309       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 13:19:00.789327       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 13:19:00.790846       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0929 13:19:00.791971       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 13:19:00.797206       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 13:19:00.799647       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 13:19:00.800698       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 13:19:00.808548       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 13:19:00.808814       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:19:00.816985       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 13:19:00.847055       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 13:19:00.847083       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 13:19:00.847092       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 13:19:47.677665       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-399583-m02\" does not exist"
	I0929 13:19:47.694520       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-399583-m02" podCIDRs=["10.244.1.0/24"]
	I0929 13:19:50.797654       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-399583-m02"
	E0929 13:20:29.604263       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-w87hh failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-w87hh\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	E0929 13:20:29.608056       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-w87hh failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-w87hh\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0929 13:20:30.300260       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-399583-m03\" does not exist"
	I0929 13:20:30.377623       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-399583-m03" podCIDRs=["10.244.2.0/24"]
	I0929 13:20:30.820218       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-399583-m03"
	
	
	==> kube-proxy [476e33049da2] <==
	I0929 13:19:03.199821       1 server_linux.go:53] "Using iptables proxy"
	I0929 13:19:03.294226       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 13:19:03.394460       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 13:19:03.394502       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0929 13:19:03.394589       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 13:19:03.504967       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 13:19:03.505029       1 server_linux.go:132] "Using iptables Proxier"
	I0929 13:19:03.532787       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 13:19:03.533149       1 server.go:527] "Version info" version="v1.34.0"
	I0929 13:19:03.533166       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 13:19:03.534951       1 config.go:200] "Starting service config controller"
	I0929 13:19:03.534962       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 13:19:03.534997       1 config.go:106] "Starting endpoint slice config controller"
	I0929 13:19:03.535002       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 13:19:03.535014       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 13:19:03.535018       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 13:19:03.539569       1 config.go:309] "Starting node config controller"
	I0929 13:19:03.539584       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 13:19:03.539592       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 13:19:03.640616       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 13:19:03.640653       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 13:19:03.640699       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [e5057f638dbe] <==
	E0929 13:18:54.475641       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 13:18:54.475689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 13:18:54.475743       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 13:18:54.475832       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 13:18:54.475967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 13:18:54.476015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 13:18:54.476078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 13:18:54.481255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 13:18:54.481334       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 13:18:54.481399       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 13:18:54.481713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 13:18:54.486612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 13:18:54.486806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 13:18:54.488764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 13:18:55.283737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0929 13:18:57.358532       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0929 13:20:30.505311       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-cpdlp\": pod kube-proxy-cpdlp is already assigned to node \"ha-399583-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-cpdlp" node="ha-399583-m03"
	E0929 13:20:30.506253       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 9ba5e634-5db2-4592-98d3-cd8afa30cf47(kube-system/kube-proxy-cpdlp) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-cpdlp"
	E0929 13:20:30.506353       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-cpdlp\": pod kube-proxy-cpdlp is already assigned to node \"ha-399583-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-cpdlp"
	I0929 13:20:30.507729       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-cpdlp" node="ha-399583-m03"
	I0929 13:20:53.296078       1 cache.go:512] "Pod was added to a different node than it was assumed" podKey="163889eb-aeae-4a84-8222-859102d02ec1" pod="default/busybox-7b57f96db7-92l4c" assumedNode="ha-399583-m02" currentNode="ha-399583"
	E0929 13:20:53.352858       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-92l4c\": pod busybox-7b57f96db7-92l4c is already assigned to node \"ha-399583-m02\"" plugin="DefaultBinder" pod="default/busybox-7b57f96db7-92l4c" node="ha-399583"
	E0929 13:20:53.354238       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 163889eb-aeae-4a84-8222-859102d02ec1(default/busybox-7b57f96db7-92l4c) was assumed on ha-399583 but assigned to ha-399583-m02" logger="UnhandledError" pod="default/busybox-7b57f96db7-92l4c"
	E0929 13:20:53.354478       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-7b57f96db7-92l4c\": pod busybox-7b57f96db7-92l4c is already assigned to node \"ha-399583-m02\"" logger="UnhandledError" pod="default/busybox-7b57f96db7-92l4c"
	I0929 13:20:53.356051       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-7b57f96db7-92l4c" node="ha-399583-m02"
	
	
	==> kubelet <==
	Sep 29 13:19:01 ha-399583 kubelet[2463]: I0929 13:19:01.941401    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab-tmp\") pod \"storage-provisioner\" (UID: \"5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab\") " pod="kube-system/storage-provisioner"
	Sep 29 13:19:01 ha-399583 kubelet[2463]: I0929 13:19:01.942717    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhcwh\" (UniqueName: \"kubernetes.io/projected/5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab-kube-api-access-qhcwh\") pod \"storage-provisioner\" (UID: \"5b4eeec2-2667-4b46-a2f7-6e5fd35bcbab\") " pod="kube-system/storage-provisioner"
	Sep 29 13:19:01 ha-399583 kubelet[2463]: I0929 13:19:01.966702    2463 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.046974    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f0fb99f-7e4a-493f-b70f-40f31bcab4d4-config-volume\") pod \"coredns-66bc5c9577-5dqqj\" (UID: \"8f0fb99f-7e4a-493f-b70f-40f31bcab4d4\") " pod="kube-system/coredns-66bc5c9577-5dqqj"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.047048    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klnx2\" (UniqueName: \"kubernetes.io/projected/8f0fb99f-7e4a-493f-b70f-40f31bcab4d4-kube-api-access-klnx2\") pod \"coredns-66bc5c9577-5dqqj\" (UID: \"8f0fb99f-7e4a-493f-b70f-40f31bcab4d4\") " pod="kube-system/coredns-66bc5c9577-5dqqj"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.147537    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3dba7282-54c9-4cf8-acd8-64548b982b4e-config-volume\") pod \"coredns-66bc5c9577-p6v89\" (UID: \"3dba7282-54c9-4cf8-acd8-64548b982b4e\") " pod="kube-system/coredns-66bc5c9577-p6v89"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.147614    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gsw2\" (UniqueName: \"kubernetes.io/projected/3dba7282-54c9-4cf8-acd8-64548b982b4e-kube-api-access-5gsw2\") pod \"coredns-66bc5c9577-p6v89\" (UID: \"3dba7282-54c9-4cf8-acd8-64548b982b4e\") " pod="kube-system/coredns-66bc5c9577-p6v89"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.600210    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9218c0ec505c1057217ae4b2feb723de8d7840bad6ef2c8e380e65980791b749"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.611142    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c699a05b6ea5ac4476f9801aec167be528e8c574c0bba66e213422d433e1dfb5"
	Sep 29 13:19:02 ha-399583 kubelet[2463]: I0929 13:19:02.661525    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ee9c364d5070171840acea461541085340472795fb2ec67d14b11b3ffe769fed"
	Sep 29 13:19:03 ha-399583 kubelet[2463]: I0929 13:19:03.912260    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=2.912239149 podStartE2EDuration="2.912239149s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 13:19:03.911960943 +0000 UTC m=+4.815601881" watchObservedRunningTime="2025-09-29 13:19:03.912239149 +0000 UTC m=+4.815880079"
	Sep 29 13:19:04 ha-399583 kubelet[2463]: I0929 13:19:04.291473    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-p6v89" podStartSLOduration=3.291451462 podStartE2EDuration="3.291451462s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 13:19:04.145831901 +0000 UTC m=+5.049472831" watchObservedRunningTime="2025-09-29 13:19:04.291451462 +0000 UTC m=+5.195092392"
	Sep 29 13:19:04 ha-399583 kubelet[2463]: I0929 13:19:04.958480    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s2d46" podStartSLOduration=3.958448969 podStartE2EDuration="3.958448969s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 13:19:04.292620385 +0000 UTC m=+5.196261324" watchObservedRunningTime="2025-09-29 13:19:04.958448969 +0000 UTC m=+5.862089899"
	Sep 29 13:19:05 ha-399583 kubelet[2463]: I0929 13:19:05.415830    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5dqqj" podStartSLOduration=4.415802841 podStartE2EDuration="4.415802841s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-29 13:19:05.381612865 +0000 UTC m=+6.285253812" watchObservedRunningTime="2025-09-29 13:19:05.415802841 +0000 UTC m=+6.319443763"
	Sep 29 13:19:08 ha-399583 kubelet[2463]: I0929 13:19:08.458326    2463 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-552n5" podStartSLOduration=3.736339255 podStartE2EDuration="7.458306005s" podCreationTimestamp="2025-09-29 13:19:01 +0000 UTC" firstStartedPulling="2025-09-29 13:19:02.612284284 +0000 UTC m=+3.515925214" lastFinishedPulling="2025-09-29 13:19:06.334251034 +0000 UTC m=+7.237891964" observedRunningTime="2025-09-29 13:19:08.456921826 +0000 UTC m=+9.360562756" watchObservedRunningTime="2025-09-29 13:19:08.458306005 +0000 UTC m=+9.361946935"
	Sep 29 13:19:09 ha-399583 kubelet[2463]: I0929 13:19:09.612760    2463 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 29 13:19:09 ha-399583 kubelet[2463]: I0929 13:19:09.617819    2463 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 29 13:19:17 ha-399583 kubelet[2463]: I0929 13:19:17.682336    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e602a051efa9808202ff5e0a632206364d9f55dc499d3b6560233b6b121e69c"
	Sep 29 13:19:18 ha-399583 kubelet[2463]: I0929 13:19:18.734101    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="053eae7f968bc8920259052b979365028efdf5b6724575a3a95323877965773b"
	Sep 29 13:19:31 ha-399583 kubelet[2463]: I0929 13:19:31.279473    2463 scope.go:117] "RemoveContainer" containerID="f9a485d796f1697bb95b77b506d6d7d33a25885377c6842c14c0361eeaa21499"
	Sep 29 13:19:32 ha-399583 kubelet[2463]: I0929 13:19:32.418725    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6c2aec96a1a19b5b0a1ac112841a4e3b12f107c874d56c4cd9ffa6e933696aa0"
	Sep 29 13:19:33 ha-399583 kubelet[2463]: I0929 13:19:33.476964    2463 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="43b7c0b16072c37f6e6d3559eb5698c9f76cb94808a04f73835d951122fee25b"
	Sep 29 13:19:33 ha-399583 kubelet[2463]: I0929 13:19:33.477022    2463 scope.go:117] "RemoveContainer" containerID="e1ae11a45d2ff19e6c97670cfafd46212633ec26395d6693473ad110b077e269"
	Sep 29 13:19:34 ha-399583 kubelet[2463]: I0929 13:19:34.511475    2463 scope.go:117] "RemoveContainer" containerID="c27d8d57cfbf9403c8ac768b52321e99a3d55657784a667c457dfd2e153c2654"
	Sep 29 13:20:53 ha-399583 kubelet[2463]: I0929 13:20:53.531890    2463 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mkrvl\" (UniqueName: \"kubernetes.io/projected/32a441ef-2e7d-4ea5-9e66-94d19d0b14be-kube-api-access-mkrvl\") pod \"busybox-7b57f96db7-jwnlz\" (UID: \"32a441ef-2e7d-4ea5-9e66-94d19d0b14be\") " pod="default/busybox-7b57f96db7-jwnlz"
	

                                                
                                                
-- /stdout --
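The describe-node output above reports a Ready condition of True (reason KubeletReady) for ha-399583, ha-399583-m02 and ha-399583-m03. As a minimal illustration of how that condition could be read programmatically instead of scraped from the dump, the Go sketch below uses client-go against a local kubeconfig; it is not part of the minikube test suite, and the ha-399583 context name is simply taken from the profile in this log.

    // Illustrative sketch (not minikube code): print the Ready condition that
    // the describe-node dump above reports for each node in the cluster.
    // Assumes a local kubeconfig containing an "ha-399583" context.
    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load the default kubeconfig and pin the profile's context explicitly.
    	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
    		clientcmd.NewDefaultClientConfigLoadingRules(),
    		&clientcmd.ConfigOverrides{CurrentContext: "ha-399583"},
    	).ClientConfig()
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, n := range nodes.Items {
    		for _, c := range n.Status.Conditions {
    			if c.Type == corev1.NodeReady {
    				// Corresponds to the "Ready True ... KubeletReady" rows above.
    				fmt.Printf("%s Ready=%s (%s)\n", n.Name, c.Status, c.Reason)
    			}
    		}
    	}
    }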
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p ha-399583 -n ha-399583
helpers_test.go:269: (dbg) Run:  kubectl --context ha-399583 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (3.24s)
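The post-mortem helper at helpers_test.go:269 shells out to kubectl to list any pods that are not in the Running phase. A rough, self-contained sketch of that same query is shown below; it is illustrative only, not the actual helpers_test.go implementation, and the profile name ha-399583 is taken from the log above.

    // Illustrative sketch: reproduce the post-mortem query from helpers_test.go:269,
    // which prints the names of pods whose phase is not Running.
    // Assumes kubectl is on PATH and the "ha-399583" context exists locally.
    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    )

    func main() {
    	cmd := exec.Command(
    		"kubectl", "--context", "ha-399583",
    		"get", "po", "-A",
    		"-o=jsonpath={.items[*].metadata.name}",
    		"--field-selector=status.phase!=Running",
    	)
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		log.Fatalf("kubectl failed: %v\n%s", err, out)
    	}
    	fmt.Printf("pods not in Running phase: %q\n", string(out))
    }

An empty listing generally means every pod is Running; the failure above comes from the ping-from-pod check itself, not from this helper.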

                                                
                                    
TestNetworkPlugins/group/calico/Start (288.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker
E0929 14:02:50.246202 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p calico-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=docker: exit status 80 (4m48.834055994s)

                                                
                                                
-- stdout --
	* [calico-212797] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	* Using Docker driver with root privileges
	* Starting "calico-212797" primary control-plane node in "calico-212797" cluster
	* Pulling base image v0.0.48 ...
	* Configuring Calico (Container Networking Interface) ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 14:02:39.946042 1443614 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:02:39.946214 1443614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:02:39.946242 1443614 out.go:374] Setting ErrFile to fd 2...
	I0929 14:02:39.946263 1443614 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:02:39.946517 1443614 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:02:39.946974 1443614 out.go:368] Setting JSON to false
	I0929 14:02:39.947981 1443614 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":20712,"bootTime":1759133848,"procs":188,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:02:39.948079 1443614 start.go:140] virtualization:  
	I0929 14:02:39.953627 1443614 out.go:179] * [calico-212797] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:02:39.956916 1443614 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:02:39.956994 1443614 notify.go:220] Checking for updates...
	I0929 14:02:39.963199 1443614 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:02:39.966241 1443614 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:02:39.969370 1443614 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:02:39.972298 1443614 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:02:39.975264 1443614 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:02:39.978788 1443614 config.go:182] Loaded profile config "kubernetes-upgrade-710674": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:02:39.978908 1443614 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:02:40.000169 1443614 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:02:40.000298 1443614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:02:40.078180 1443614 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:02:40.068249743 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:02:40.078299 1443614 docker.go:318] overlay module found
	I0929 14:02:40.081441 1443614 out.go:179] * Using the docker driver based on user configuration
	I0929 14:02:40.084269 1443614 start.go:304] selected driver: docker
	I0929 14:02:40.084298 1443614 start.go:924] validating driver "docker" against <nil>
	I0929 14:02:40.084315 1443614 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:02:40.085178 1443614 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:02:40.141479 1443614 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:02:40.132235675 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:02:40.141629 1443614 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 14:02:40.141866 1443614 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:02:40.144728 1443614 out.go:179] * Using Docker driver with root privileges
	I0929 14:02:40.147676 1443614 cni.go:84] Creating CNI manager for "calico"
	I0929 14:02:40.147696 1443614 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I0929 14:02:40.147788 1443614 start.go:348] cluster config:
	{Name:calico-212797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-212797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: Netwo
rkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInt
erval:1m0s}
	I0929 14:02:40.151006 1443614 out.go:179] * Starting "calico-212797" primary control-plane node in "calico-212797" cluster
	I0929 14:02:40.153946 1443614 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:02:40.157010 1443614 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:02:40.159993 1443614 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:02:40.160080 1443614 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:02:40.160087 1443614 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 14:02:40.160104 1443614 cache.go:58] Caching tarball of preloaded images
	I0929 14:02:40.160205 1443614 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 14:02:40.160215 1443614 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 14:02:40.160333 1443614 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/config.json ...
	I0929 14:02:40.160364 1443614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/config.json: {Name:mk30ec96a8c632922bbc56eab89fb2766db859cf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:02:40.179570 1443614 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:02:40.179607 1443614 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:02:40.179628 1443614 cache.go:232] Successfully downloaded all kic artifacts
	I0929 14:02:40.179660 1443614 start.go:360] acquireMachinesLock for calico-212797: {Name:mkf80555d9b5c926f3c37d7a0a0185b66c5f030f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:02:40.179782 1443614 start.go:364] duration metric: took 99.455µs to acquireMachinesLock for "calico-212797"
	I0929 14:02:40.179813 1443614 start.go:93] Provisioning new machine with config: &{Name:calico-212797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-212797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:02:40.179882 1443614 start.go:125] createHost starting for "" (driver="docker")
	I0929 14:02:40.183194 1443614 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 14:02:40.183452 1443614 start.go:159] libmachine.API.Create for "calico-212797" (driver="docker")
	I0929 14:02:40.183488 1443614 client.go:168] LocalClient.Create starting
	I0929 14:02:40.183564 1443614 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem
	I0929 14:02:40.183606 1443614 main.go:141] libmachine: Decoding PEM data...
	I0929 14:02:40.183625 1443614 main.go:141] libmachine: Parsing certificate...
	I0929 14:02:40.183693 1443614 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem
	I0929 14:02:40.183713 1443614 main.go:141] libmachine: Decoding PEM data...
	I0929 14:02:40.183729 1443614 main.go:141] libmachine: Parsing certificate...
	I0929 14:02:40.184100 1443614 cli_runner.go:164] Run: docker network inspect calico-212797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 14:02:40.201227 1443614 cli_runner.go:211] docker network inspect calico-212797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 14:02:40.201313 1443614 network_create.go:284] running [docker network inspect calico-212797] to gather additional debugging logs...
	I0929 14:02:40.201334 1443614 cli_runner.go:164] Run: docker network inspect calico-212797
	W0929 14:02:40.217207 1443614 cli_runner.go:211] docker network inspect calico-212797 returned with exit code 1
	I0929 14:02:40.217238 1443614 network_create.go:287] error running [docker network inspect calico-212797]: docker network inspect calico-212797: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-212797 not found
	I0929 14:02:40.217253 1443614 network_create.go:289] output of [docker network inspect calico-212797]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-212797 not found
	
	** /stderr **
	I0929 14:02:40.217352 1443614 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:02:40.235330 1443614 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85cc826cc833 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e6:9d:b6:86:22:ad} reservation:<nil>}
	I0929 14:02:40.235613 1443614 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-aee8219e46ea IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:48:5c:79:e0:92} reservation:<nil>}
	I0929 14:02:40.235866 1443614 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-415857c413ae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:0e:aa:55:e2:18} reservation:<nil>}
	I0929 14:02:40.236318 1443614 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b23c0}
	I0929 14:02:40.236342 1443614 network_create.go:124] attempt to create docker network calico-212797 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0929 14:02:40.236404 1443614 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-212797 calico-212797
	I0929 14:02:40.299658 1443614 network_create.go:108] docker network calico-212797 192.168.76.0/24 created
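
The subnet scan above is minikube picking the first private /24 that no existing Docker bridge already claims: 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 are already held by other bridges on this host, so the new calico-212797 network lands on 192.168.76.0/24. A rough way to reproduce that view by hand (a sketch only; the profile name and subnet are the ones from this log, and it assumes a local docker CLI pointed at the same daemon):

    # List every Docker network with its IPv4 subnet, mirroring the
    # "skipping subnet ... that is taken" decisions logged above.
    docker network ls -q | xargs docker network inspect \
      --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}} {{end}}'

    # Confirm the subnet and gateway minikube chose for this profile.
    docker network inspect calico-212797 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
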
	I0929 14:02:40.299693 1443614 kic.go:121] calculated static IP "192.168.76.2" for the "calico-212797" container
	I0929 14:02:40.299776 1443614 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 14:02:40.316115 1443614 cli_runner.go:164] Run: docker volume create calico-212797 --label name.minikube.sigs.k8s.io=calico-212797 --label created_by.minikube.sigs.k8s.io=true
	I0929 14:02:40.335300 1443614 oci.go:103] Successfully created a docker volume calico-212797
	I0929 14:02:40.335397 1443614 cli_runner.go:164] Run: docker run --rm --name calico-212797-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-212797 --entrypoint /usr/bin/test -v calico-212797:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 14:02:40.939124 1443614 oci.go:107] Successfully prepared a docker volume calico-212797
	I0929 14:02:40.939171 1443614 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:02:40.939192 1443614 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 14:02:40.939266 1443614 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-212797:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 14:02:44.927222 1443614 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v calico-212797:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (3.987921026s)
	I0929 14:02:44.927264 1443614 kic.go:203] duration metric: took 3.988067333s to extract preloaded images to volume ...
	W0929 14:02:44.927412 1443614 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 14:02:44.927528 1443614 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 14:02:44.996886 1443614 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-212797 --name calico-212797 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-212797 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-212797 --network calico-212797 --ip 192.168.76.2 --volume calico-212797:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 14:02:45.480923 1443614 cli_runner.go:164] Run: docker container inspect calico-212797 --format={{.State.Running}}
	I0929 14:02:45.502242 1443614 cli_runner.go:164] Run: docker container inspect calico-212797 --format={{.State.Status}}
	I0929 14:02:45.526500 1443614 cli_runner.go:164] Run: docker exec calico-212797 stat /var/lib/dpkg/alternatives/iptables
	I0929 14:02:45.576004 1443614 oci.go:144] the created container "calico-212797" has a running status.
	I0929 14:02:45.576044 1443614 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa...
	I0929 14:02:47.156859 1443614 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 14:02:47.178392 1443614 cli_runner.go:164] Run: docker container inspect calico-212797 --format={{.State.Status}}
	I0929 14:02:47.203171 1443614 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 14:02:47.203191 1443614 kic_runner.go:114] Args: [docker exec --privileged calico-212797 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 14:02:47.251119 1443614 cli_runner.go:164] Run: docker container inspect calico-212797 --format={{.State.Status}}
	I0929 14:02:47.274049 1443614 machine.go:93] provisionDockerMachine start ...
	I0929 14:02:47.274142 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:47.300668 1443614 main.go:141] libmachine: Using SSH client type: native
	I0929 14:02:47.301000 1443614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I0929 14:02:47.301010 1443614 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:02:47.444231 1443614 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-212797
	
	I0929 14:02:47.444256 1443614 ubuntu.go:182] provisioning hostname "calico-212797"
	I0929 14:02:47.444328 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:47.466862 1443614 main.go:141] libmachine: Using SSH client type: native
	I0929 14:02:47.467169 1443614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I0929 14:02:47.467186 1443614 main.go:141] libmachine: About to run SSH command:
	sudo hostname calico-212797 && echo "calico-212797" | sudo tee /etc/hostname
	I0929 14:02:47.631553 1443614 main.go:141] libmachine: SSH cmd err, output: <nil>: calico-212797
	
	I0929 14:02:47.631638 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:47.661918 1443614 main.go:141] libmachine: Using SSH client type: native
	I0929 14:02:47.662217 1443614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I0929 14:02:47.662234 1443614 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-212797' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-212797/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-212797' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:02:47.818092 1443614 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:02:47.818121 1443614 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:02:47.818175 1443614 ubuntu.go:190] setting up certificates
	I0929 14:02:47.818192 1443614 provision.go:84] configureAuth start
	I0929 14:02:47.818269 1443614 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-212797
	I0929 14:02:47.841681 1443614 provision.go:143] copyHostCerts
	I0929 14:02:47.841757 1443614 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:02:47.841772 1443614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:02:47.841853 1443614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:02:47.841952 1443614 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:02:47.841964 1443614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:02:47.841993 1443614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:02:47.842051 1443614 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:02:47.842060 1443614 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:02:47.842085 1443614 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:02:47.842134 1443614 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.calico-212797 san=[127.0.0.1 192.168.76.2 calico-212797 localhost minikube]
	I0929 14:02:48.744233 1443614 provision.go:177] copyRemoteCerts
	I0929 14:02:48.744305 1443614 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:02:48.744356 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:48.760828 1443614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa Username:docker}
	I0929 14:02:48.861639 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:02:48.885776 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0929 14:02:48.910442 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 14:02:48.934678 1443614 provision.go:87] duration metric: took 1.116470609s to configureAuth
	I0929 14:02:48.934748 1443614 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:02:48.934955 1443614 config.go:182] Loaded profile config "calico-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:02:48.935013 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:48.951918 1443614 main.go:141] libmachine: Using SSH client type: native
	I0929 14:02:48.952220 1443614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I0929 14:02:48.952239 1443614 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:02:49.097011 1443614 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:02:49.097030 1443614 ubuntu.go:71] root file system type: overlay
	I0929 14:02:49.097150 1443614 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:02:49.097219 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:49.114927 1443614 main.go:141] libmachine: Using SSH client type: native
	I0929 14:02:49.115237 1443614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I0929 14:02:49.115327 1443614 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:02:49.268805 1443614 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:02:49.268888 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:49.288232 1443614 main.go:141] libmachine: Using SSH client type: native
	I0929 14:02:49.288550 1443614 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34238 <nil> <nil>}
	I0929 14:02:49.288573 1443614 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:02:50.156196 1443614 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 14:02:49.260777386 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
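The docker.service rewrite above follows the pattern the unit's own comments describe: the bare ExecStart= first clears the start command inherited from the stock unit, and the second ExecStart= supplies the TLS-enabled dockerd invocation, so systemd sees exactly one start command for this Type=notify service. To double-check what actually landed on the node after the restart (a sketch; it assumes the calico-212797 profile is still running and reachable with minikube ssh):

    # Show the unit systemd loaded and confirm only one ExecStart survived.
    minikube ssh -p calico-212797 -- systemctl cat docker.service
    minikube ssh -p calico-212797 -- systemctl show docker --property=ExecStart
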
	I0929 14:02:50.156289 1443614 machine.go:96] duration metric: took 2.882220549s to provisionDockerMachine
	I0929 14:02:50.156315 1443614 client.go:171] duration metric: took 9.972819746s to LocalClient.Create
	I0929 14:02:50.156366 1443614 start.go:167] duration metric: took 9.972915648s to libmachine.API.Create "calico-212797"
	I0929 14:02:50.156393 1443614 start.go:293] postStartSetup for "calico-212797" (driver="docker")
	I0929 14:02:50.156418 1443614 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:02:50.156604 1443614 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:02:50.156673 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:50.173809 1443614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa Username:docker}
	I0929 14:02:50.273504 1443614 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:02:50.276568 1443614 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:02:50.276599 1443614 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:02:50.276653 1443614 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:02:50.276658 1443614 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:02:50.276669 1443614 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:02:50.276721 1443614 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:02:50.276798 1443614 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:02:50.276895 1443614 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:02:50.285425 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:02:50.309210 1443614 start.go:296] duration metric: took 152.78256ms for postStartSetup
	I0929 14:02:50.309565 1443614 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-212797
	I0929 14:02:50.336002 1443614 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/config.json ...
	I0929 14:02:50.336262 1443614 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:02:50.336302 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:50.355800 1443614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa Username:docker}
	I0929 14:02:50.453549 1443614 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:02:50.458342 1443614 start.go:128] duration metric: took 10.278443244s to createHost
	I0929 14:02:50.458364 1443614 start.go:83] releasing machines lock for "calico-212797", held for 10.278569998s
	I0929 14:02:50.458440 1443614 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-212797
	I0929 14:02:50.475205 1443614 ssh_runner.go:195] Run: cat /version.json
	I0929 14:02:50.475268 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:50.475575 1443614 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:02:50.475646 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:02:50.496124 1443614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa Username:docker}
	I0929 14:02:50.516020 1443614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa Username:docker}
	I0929 14:02:50.596064 1443614 ssh_runner.go:195] Run: systemctl --version
	I0929 14:02:50.721518 1443614 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:02:50.726048 1443614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:02:50.752819 1443614 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:02:50.752895 1443614 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:02:50.785459 1443614 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 14:02:50.785487 1443614 start.go:495] detecting cgroup driver to use...
	I0929 14:02:50.785520 1443614 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:02:50.785624 1443614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:02:50.803252 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:02:50.813693 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:02:50.826385 1443614 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:02:50.826462 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:02:50.837466 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:02:50.848223 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:02:50.858715 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:02:50.869681 1443614 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:02:50.879034 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:02:50.889716 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:02:50.899700 1443614 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:02:50.910288 1443614 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:02:50.919074 1443614 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 14:02:50.927791 1443614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:02:51.042039 1443614 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 14:02:51.178538 1443614 start.go:495] detecting cgroup driver to use...
	I0929 14:02:51.178585 1443614 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:02:51.178691 1443614 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:02:51.199837 1443614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:02:51.214470 1443614 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:02:51.256160 1443614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:02:51.273161 1443614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:02:51.291869 1443614 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:02:51.314109 1443614 ssh_runner.go:195] Run: which cri-dockerd
	I0929 14:02:51.318152 1443614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:02:51.337165 1443614 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:02:51.359755 1443614 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:02:51.469595 1443614 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:02:51.571529 1443614 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:02:51.571650 1443614 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 14:02:51.599413 1443614 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:02:51.613005 1443614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:02:51.720548 1443614 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 14:02:52.238093 1443614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 14:02:52.251122 1443614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:02:52.264821 1443614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:02:52.278263 1443614 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:02:52.392021 1443614 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:02:52.503467 1443614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:02:52.620444 1443614 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:02:52.639689 1443614 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:02:52.652803 1443614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:02:52.762391 1443614 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:02:52.867120 1443614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:02:52.883105 1443614 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:02:52.883295 1443614 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:02:52.890377 1443614 start.go:563] Will wait 60s for crictl version
	I0929 14:02:52.890512 1443614 ssh_runner.go:195] Run: which crictl
	I0929 14:02:52.894487 1443614 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:02:52.950325 1443614 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 14:02:52.950464 1443614 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:02:52.991760 1443614 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:02:53.024994 1443614 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:02:53.025104 1443614 cli_runner.go:164] Run: docker network inspect calico-212797 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:02:53.041843 1443614 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 14:02:53.046087 1443614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:02:53.061099 1443614 kubeadm.go:875] updating cluster {Name:calico-212797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-212797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:02:53.061220 1443614 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:02:53.061289 1443614 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:02:53.082914 1443614 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 14:02:53.082938 1443614 docker.go:621] Images already preloaded, skipping extraction
	I0929 14:02:53.083008 1443614 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:02:53.103773 1443614 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
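
Both docker images listings above return the same eight images, which is why minikube notes the images are preloaded and skips extraction and loading in the surrounding lines. If you want to cross-check that set against what kubeadm itself expects for v1.34.0 (a sketch; it assumes the profile is up and uses the kubeadm binary minikube places under /var/lib/minikube/binaries, as seen later in this log):

    # Image list kubeadm would pull for this Kubernetes version, for comparison
    # with the preloaded list above (run inside the node).
    minikube ssh -p calico-212797 -- sudo /var/lib/minikube/binaries/v1.34.0/kubeadm \
      config images list --kubernetes-version v1.34.0
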
	I0929 14:02:53.103797 1443614 cache_images.go:85] Images are preloaded, skipping loading
	I0929 14:02:53.103808 1443614 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 docker true true} ...
	I0929 14:02:53.103906 1443614 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=calico-212797 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:calico-212797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
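
This kubelet unit fragment is the systemd drop-in minikube generates for the node: it clears the inherited ExecStart and starts the versioned kubelet binary with the node name and IP pinned to calico-212797 / 192.168.76.2. To see the rendered drop-in and the flags the running kubelet actually picked up (a sketch; the path matches the 10-kubeadm.conf copied a few lines below, and it assumes the node is reachable with minikube ssh):

    # Rendered kubelet drop-in and the live kubelet command line on the node.
    minikube ssh -p calico-212797 -- sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    minikube ssh -p calico-212797 -- pgrep -a kubelet
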
	I0929 14:02:53.103979 1443614 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:02:53.198082 1443614 cni.go:84] Creating CNI manager for "calico"
	I0929 14:02:53.198107 1443614 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:02:53.198130 1443614 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-212797 NodeName:calico-212797 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:02:53.198265 1443614 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "calico-212797"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
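The kubeadm config above is what minikube feeds to kubeadm init for this profile: the InitConfiguration pins the cri-dockerd socket and the 192.168.76.2 advertise address, the ClusterConfiguration wires control-plane.minikube.internal:8443 and the 10.244.0.0/16 pod subnet that Calico will use, and the KubeletConfiguration/KubeProxyConfiguration sections carry the cgroupfs driver and relaxed eviction settings. Recent kubeadm releases can lint such a file before init runs against it; a sketch, assuming the kubeadm binary and the kubeadm.yaml.new path that the following lines copy to the node:

    # Validate the generated config with the same kubeadm version the cluster uses.
    minikube ssh -p calico-212797 -- sudo /var/lib/minikube/binaries/v1.34.0/kubeadm \
      config validate --config /var/tmp/minikube/kubeadm.yaml.new
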
	I0929 14:02:53.198336 1443614 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:02:53.208622 1443614 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 14:02:53.208696 1443614 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:02:53.218741 1443614 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0929 14:02:53.240668 1443614 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:02:53.261209 1443614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I0929 14:02:53.281281 1443614 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:02:53.285068 1443614 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:02:53.296715 1443614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:02:53.413070 1443614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:02:53.429528 1443614 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797 for IP: 192.168.76.2
	I0929 14:02:53.429551 1443614 certs.go:194] generating shared ca certs ...
	I0929 14:02:53.429568 1443614 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:02:53.429694 1443614 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:02:53.429741 1443614 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:02:53.429752 1443614 certs.go:256] generating profile certs ...
	I0929 14:02:53.429807 1443614 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/client.key
	I0929 14:02:53.429823 1443614 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/client.crt with IP's: []
	I0929 14:02:53.772979 1443614 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/client.crt ...
	I0929 14:02:53.773011 1443614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/client.crt: {Name:mk892e255ef668961bd62d7c2dd2ebdadb9de8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:02:53.773239 1443614 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/client.key ...
	I0929 14:02:53.773254 1443614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/client.key: {Name:mk818ea7079873b68e5ad05b603355fe039f9903 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:02:53.773362 1443614 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.key.e8d2d263
	I0929 14:02:53.773381 1443614 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.crt.e8d2d263 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I0929 14:02:54.542258 1443614 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.crt.e8d2d263 ...
	I0929 14:02:54.542294 1443614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.crt.e8d2d263: {Name:mk54be09cdd6a947c73d1ab0e7f39829b99d586b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:02:54.542465 1443614 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.key.e8d2d263 ...
	I0929 14:02:54.542482 1443614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.key.e8d2d263: {Name:mk6639f1299dfbc6f44599c553122d3de27a3765 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:02:54.542557 1443614 certs.go:381] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.crt.e8d2d263 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.crt
	I0929 14:02:54.542646 1443614 certs.go:385] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.key.e8d2d263 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.key
	I0929 14:02:54.542706 1443614 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/proxy-client.key
	I0929 14:02:54.542726 1443614 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/proxy-client.crt with IP's: []
	I0929 14:02:55.393654 1443614 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/proxy-client.crt ...
	I0929 14:02:55.393684 1443614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/proxy-client.crt: {Name:mk18f72a335f24229414688a0cba8cd0a289882c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:02:55.393896 1443614 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/proxy-client.key ...
	I0929 14:02:55.393912 1443614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/proxy-client.key: {Name:mk4b1f27eb6a56877e26ffe89f563a6990bed0a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:02:55.394111 1443614 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:02:55.394147 1443614 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:02:55.394156 1443614 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:02:55.394182 1443614 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:02:55.394204 1443614 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:02:55.394234 1443614 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:02:55.394274 1443614 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:02:55.405822 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:02:55.431894 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:02:55.469879 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:02:55.504968 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:02:55.539835 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0929 14:02:55.575324 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 14:02:55.602944 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:02:55.628217 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/calico-212797/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0929 14:02:55.660881 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:02:55.686169 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:02:55.711549 1443614 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:02:55.735937 1443614 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:02:55.754338 1443614 ssh_runner.go:195] Run: openssl version
	I0929 14:02:55.760631 1443614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:02:55.770628 1443614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:02:55.774965 1443614 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:02:55.775085 1443614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:02:55.782950 1443614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:02:55.793774 1443614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:02:55.810435 1443614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:02:55.814645 1443614 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:02:55.814733 1443614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:02:55.822196 1443614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:02:55.835419 1443614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:02:55.844886 1443614 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:02:55.848884 1443614 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:02:55.848974 1443614 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:02:55.856467 1443614 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
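	Note: the symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values. A rough manual equivalent of what the runner does for each CA file (a sketch, not the actual minikube code path) is:

	    # compute the subject hash and create the /etc/ssl/certs links for one CA file
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"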
	I0929 14:02:55.865793 1443614 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:02:55.870333 1443614 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 14:02:55.870412 1443614 kubeadm.go:392] StartCluster: {Name:calico-212797 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:calico-212797 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:02:55.870569 1443614 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:02:55.888843 1443614 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:02:55.899540 1443614 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 14:02:55.908482 1443614 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 14:02:55.908587 1443614 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 14:02:55.919996 1443614 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 14:02:55.920018 1443614 kubeadm.go:157] found existing configuration files:
	
	I0929 14:02:55.920098 1443614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 14:02:55.929625 1443614 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 14:02:55.929710 1443614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 14:02:55.938633 1443614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 14:02:55.948611 1443614 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 14:02:55.948730 1443614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 14:02:55.958490 1443614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 14:02:55.968612 1443614 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 14:02:55.968680 1443614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 14:02:55.977818 1443614 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 14:02:55.987653 1443614 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 14:02:55.987715 1443614 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
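	Note: the four grep/rm pairs above are the stale-config cleanup; a compact shell equivalent of that per-file check (a sketch of what the log shows, not the actual implementation) would be:

	    # keep an existing kubeconfig only if it already points at the expected endpoint
	    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
	        || sudo rm -f "/etc/kubernetes/$f"
	    done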
	I0929 14:02:55.997761 1443614 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 14:02:56.051893 1443614 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 14:02:56.052056 1443614 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 14:02:56.099085 1443614 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 14:02:56.099177 1443614 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0929 14:02:56.099238 1443614 kubeadm.go:310] OS: Linux
	I0929 14:02:56.099304 1443614 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 14:02:56.099368 1443614 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0929 14:02:56.099435 1443614 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 14:02:56.099497 1443614 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 14:02:56.099560 1443614 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 14:02:56.099629 1443614 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 14:02:56.099681 1443614 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 14:02:56.099754 1443614 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 14:02:56.099818 1443614 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0929 14:02:56.206481 1443614 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 14:02:56.206602 1443614 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 14:02:56.206705 1443614 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 14:02:56.220552 1443614 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 14:02:56.226580 1443614 out.go:252]   - Generating certificates and keys ...
	I0929 14:02:56.226677 1443614 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 14:02:56.226751 1443614 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 14:02:56.457087 1443614 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 14:02:56.704937 1443614 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 14:02:56.990815 1443614 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 14:02:57.149019 1443614 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 14:02:57.562553 1443614 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 14:02:57.563229 1443614 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [calico-212797 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0929 14:02:59.164834 1443614 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 14:02:59.164976 1443614 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [calico-212797 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0929 14:02:59.300854 1443614 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 14:03:00.371585 1443614 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 14:03:00.784837 1443614 kubeadm.go:310] [certs] Generating "sa" key and public key
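	Note: the certificates kubeadm reports generating above all land in the certificateDir shown later in the log (/var/lib/minikube/certs); they can be inspected on the node with standard openssl commands, for example (a sketch, path taken from the scp steps earlier):

	    # show subject and validity window of the apiserver serving certificate
	    sudo openssl x509 -noout -subject -dates -in /var/lib/minikube/certs/apiserver.crt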
	I0929 14:03:00.788765 1443614 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 14:03:01.000934 1443614 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 14:03:01.590697 1443614 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 14:03:03.060252 1443614 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 14:03:03.618937 1443614 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 14:03:04.005629 1443614 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 14:03:04.006894 1443614 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 14:03:04.010297 1443614 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 14:03:04.013579 1443614 out.go:252]   - Booting up control plane ...
	I0929 14:03:04.013692 1443614 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 14:03:04.015652 1443614 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 14:03:04.017642 1443614 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 14:03:04.034030 1443614 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 14:03:04.034144 1443614 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 14:03:04.043596 1443614 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 14:03:04.043701 1443614 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 14:03:04.043743 1443614 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 14:03:04.189951 1443614 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 14:03:04.190080 1443614 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 14:03:06.192030 1443614 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.001308324s
	I0929 14:03:06.195861 1443614 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 14:03:06.195962 1443614 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I0929 14:03:06.196281 1443614 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 14:03:06.196372 1443614 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 14:03:16.496216 1443614 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 10.299818575s
	I0929 14:03:17.573194 1443614 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 11.377243611s
	I0929 14:03:19.706426 1443614 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 13.510519996s
	I0929 14:03:19.730953 1443614 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 14:03:19.745684 1443614 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 14:03:19.761761 1443614 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 14:03:19.761974 1443614 kubeadm.go:310] [mark-control-plane] Marking the node calico-212797 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 14:03:19.777921 1443614 kubeadm.go:310] [bootstrap-token] Using token: nr1h96.wrjlx40ue3y357er
	I0929 14:03:19.780864 1443614 out.go:252]   - Configuring RBAC rules ...
	I0929 14:03:19.780997 1443614 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 14:03:19.785389 1443614 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 14:03:19.794652 1443614 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 14:03:19.799507 1443614 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 14:03:19.804571 1443614 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 14:03:19.810910 1443614 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 14:03:20.115882 1443614 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 14:03:20.553210 1443614 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 14:03:21.114150 1443614 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 14:03:21.115637 1443614 kubeadm.go:310] 
	I0929 14:03:21.115739 1443614 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 14:03:21.115755 1443614 kubeadm.go:310] 
	I0929 14:03:21.115845 1443614 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 14:03:21.115855 1443614 kubeadm.go:310] 
	I0929 14:03:21.115888 1443614 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 14:03:21.115954 1443614 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 14:03:21.116011 1443614 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 14:03:21.116020 1443614 kubeadm.go:310] 
	I0929 14:03:21.116076 1443614 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 14:03:21.116085 1443614 kubeadm.go:310] 
	I0929 14:03:21.116136 1443614 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 14:03:21.116144 1443614 kubeadm.go:310] 
	I0929 14:03:21.116200 1443614 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 14:03:21.116282 1443614 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 14:03:21.116357 1443614 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 14:03:21.116366 1443614 kubeadm.go:310] 
	I0929 14:03:21.116454 1443614 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 14:03:21.116577 1443614 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 14:03:21.116591 1443614 kubeadm.go:310] 
	I0929 14:03:21.116679 1443614 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token nr1h96.wrjlx40ue3y357er \
	I0929 14:03:21.116791 1443614 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b \
	I0929 14:03:21.116817 1443614 kubeadm.go:310] 	--control-plane 
	I0929 14:03:21.116825 1443614 kubeadm.go:310] 
	I0929 14:03:21.116914 1443614 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 14:03:21.116922 1443614 kubeadm.go:310] 
	I0929 14:03:21.117008 1443614 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token nr1h96.wrjlx40ue3y357er \
	I0929 14:03:21.117117 1443614 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b 
	I0929 14:03:21.121177 1443614 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0929 14:03:21.121428 1443614 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0929 14:03:21.121542 1443614 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
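	Note: the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA's public key. Assuming an RSA CA key and using the non-default cert dir from this log, it can be recomputed on the control plane with the standard kubeadm-docs recipe:

	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	      | openssl rsa -pubin -outform der 2>/dev/null \
	      | openssl dgst -sha256 -hex | sed 's/^.* //'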
	I0929 14:03:21.121566 1443614 cni.go:84] Creating CNI manager for "calico"
	I0929 14:03:21.125025 1443614 out.go:179] * Configuring Calico (Container Networking Interface) ...
	I0929 14:03:21.128071 1443614 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0929 14:03:21.128101 1443614 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (539470 bytes)
	I0929 14:03:21.153027 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0929 14:03:22.980484 1443614 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.827422987s)
	I0929 14:03:22.980565 1443614 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 14:03:22.980687 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:03:22.980699 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes calico-212797 minikube.k8s.io/updated_at=2025_09_29T14_03_22_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=calico-212797 minikube.k8s.io/primary=true
	I0929 14:03:23.168634 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:03:23.168699 1443614 ops.go:34] apiserver oom_adj: -16
	I0929 14:03:23.669373 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:03:24.169388 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:03:24.669738 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:03:25.169557 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:03:25.668896 1443614 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:03:25.852618 1443614 kubeadm.go:1105] duration metric: took 2.87200744s to wait for elevateKubeSystemPrivileges
	I0929 14:03:25.852646 1443614 kubeadm.go:394] duration metric: took 29.982238819s to StartCluster
	I0929 14:03:25.852663 1443614 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:03:25.852724 1443614 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:03:25.853667 1443614 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:03:25.853873 1443614 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:03:25.854022 1443614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 14:03:25.854283 1443614 config.go:182] Loaded profile config "calico-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:03:25.854326 1443614 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:03:25.854388 1443614 addons.go:69] Setting storage-provisioner=true in profile "calico-212797"
	I0929 14:03:25.854403 1443614 addons.go:238] Setting addon storage-provisioner=true in "calico-212797"
	I0929 14:03:25.854424 1443614 host.go:66] Checking if "calico-212797" exists ...
	I0929 14:03:25.854916 1443614 cli_runner.go:164] Run: docker container inspect calico-212797 --format={{.State.Status}}
	I0929 14:03:25.855869 1443614 addons.go:69] Setting default-storageclass=true in profile "calico-212797"
	I0929 14:03:25.855893 1443614 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-212797"
	I0929 14:03:25.856186 1443614 cli_runner.go:164] Run: docker container inspect calico-212797 --format={{.State.Status}}
	I0929 14:03:25.857677 1443614 out.go:179] * Verifying Kubernetes components...
	I0929 14:03:25.861502 1443614 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:03:25.917320 1443614 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:03:25.917899 1443614 addons.go:238] Setting addon default-storageclass=true in "calico-212797"
	I0929 14:03:25.917938 1443614 host.go:66] Checking if "calico-212797" exists ...
	I0929 14:03:25.918350 1443614 cli_runner.go:164] Run: docker container inspect calico-212797 --format={{.State.Status}}
	I0929 14:03:25.920492 1443614 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:03:25.920523 1443614 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:03:25.920585 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:03:25.964992 1443614 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:03:25.965014 1443614 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:03:25.965075 1443614 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-212797
	I0929 14:03:25.977587 1443614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa Username:docker}
	I0929 14:03:26.007093 1443614 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34238 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/calico-212797/id_rsa Username:docker}
	I0929 14:03:26.195668 1443614 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 14:03:26.195800 1443614 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:03:26.257817 1443614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:03:26.303901 1443614 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:03:27.589622 1443614 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.393799063s)
	I0929 14:03:27.590681 1443614 node_ready.go:35] waiting up to 15m0s for node "calico-212797" to be "Ready" ...
	I0929 14:03:27.591125 1443614 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.76.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.395425243s)
	I0929 14:03:27.591162 1443614 start.go:976] {"host.minikube.internal": 192.168.76.1} host record injected into CoreDNS's ConfigMap
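	Note: the sed pipeline above rewrites the coredns ConfigMap so its Corefile resolves host.minikube.internal to the host gateway; the injected stanza (taken directly from the sed expression in this log) is:

	    hosts {
	       192.168.76.1 host.minikube.internal
	       fallthrough
	    }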
	I0929 14:03:27.637450 1443614 node_ready.go:49] node "calico-212797" is "Ready"
	I0929 14:03:27.637528 1443614 node_ready.go:38] duration metric: took 46.787836ms for node "calico-212797" to be "Ready" ...
	I0929 14:03:27.637559 1443614 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:03:27.637649 1443614 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:03:28.003094 1443614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.745237826s)
	I0929 14:03:28.003184 1443614 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.699214076s)
	I0929 14:03:28.003485 1443614 api_server.go:72] duration metric: took 2.149585696s to wait for apiserver process to appear ...
	I0929 14:03:28.003505 1443614 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:03:28.003526 1443614 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:03:28.027212 1443614 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 14:03:28.028699 1443614 api_server.go:141] control plane version: v1.34.0
	I0929 14:03:28.028735 1443614 api_server.go:131] duration metric: took 25.222619ms to wait for apiserver health ...
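	Note: the healthz probe above is a plain HTTPS GET against the apiserver; outside the test harness the same check can be approximated with (certificate verification skipped for brevity):

	    curl -k https://192.168.76.2:8443/healthz   # expected body: ok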
	I0929 14:03:28.028745 1443614 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:03:28.036081 1443614 system_pods.go:59] 10 kube-system pods found
	I0929 14:03:28.036126 1443614 system_pods.go:61] "calico-kube-controllers-59556d9b4c-zmjl5" [6eb0da5b-0c69-46c9-a600-4404a3b9b9d2] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 14:03:28.036137 1443614 system_pods.go:61] "calico-node-gdclv" [c3b760c9-294a-4405-8868-bb0f92072326] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 14:03:28.036148 1443614 system_pods.go:61] "coredns-66bc5c9577-dtx77" [2bc59934-92af-4922-b611-33e8e48fc6ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:03:28.036156 1443614 system_pods.go:61] "coredns-66bc5c9577-h2v7h" [350216d3-c01e-4300-9df5-c78fd19ba429] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:03:28.036161 1443614 system_pods.go:61] "etcd-calico-212797" [39d7c3e7-991f-4447-9c23-60b46f63a128] Running
	I0929 14:03:28.036167 1443614 system_pods.go:61] "kube-apiserver-calico-212797" [11e6dfd0-7b98-4d6a-93eb-abf6dc6c55fa] Running
	I0929 14:03:28.036175 1443614 system_pods.go:61] "kube-controller-manager-calico-212797" [973ada77-667a-42fa-be15-dbcaa1dfec1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:03:28.036181 1443614 system_pods.go:61] "kube-proxy-q2pqh" [2f541a0a-a431-4db5-9e21-6af570318f7a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:03:28.036189 1443614 system_pods.go:61] "kube-scheduler-calico-212797" [28d64b8e-cb58-4949-aeb7-de18e21cfe7e] Running
	I0929 14:03:28.036194 1443614 system_pods.go:61] "storage-provisioner" [bd294ad1-3d5f-4e4d-bcbb-b2cdbd1b9aef] Pending
	I0929 14:03:28.036208 1443614 system_pods.go:74] duration metric: took 7.455744ms to wait for pod list to return data ...
	I0929 14:03:28.036217 1443614 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:03:28.036469 1443614 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0929 14:03:28.039664 1443614 addons.go:514] duration metric: took 2.185311718s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0929 14:03:28.051053 1443614 default_sa.go:45] found service account: "default"
	I0929 14:03:28.051076 1443614 default_sa.go:55] duration metric: took 14.848712ms for default service account to be created ...
	I0929 14:03:28.051086 1443614 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:03:28.059300 1443614 system_pods.go:86] 10 kube-system pods found
	I0929 14:03:28.059338 1443614 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zmjl5" [6eb0da5b-0c69-46c9-a600-4404a3b9b9d2] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 14:03:28.059348 1443614 system_pods.go:89] "calico-node-gdclv" [c3b760c9-294a-4405-8868-bb0f92072326] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 14:03:28.059358 1443614 system_pods.go:89] "coredns-66bc5c9577-dtx77" [2bc59934-92af-4922-b611-33e8e48fc6ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:03:28.059369 1443614 system_pods.go:89] "coredns-66bc5c9577-h2v7h" [350216d3-c01e-4300-9df5-c78fd19ba429] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:03:28.059377 1443614 system_pods.go:89] "etcd-calico-212797" [39d7c3e7-991f-4447-9c23-60b46f63a128] Running
	I0929 14:03:28.059382 1443614 system_pods.go:89] "kube-apiserver-calico-212797" [11e6dfd0-7b98-4d6a-93eb-abf6dc6c55fa] Running
	I0929 14:03:28.059393 1443614 system_pods.go:89] "kube-controller-manager-calico-212797" [973ada77-667a-42fa-be15-dbcaa1dfec1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:03:28.059402 1443614 system_pods.go:89] "kube-proxy-q2pqh" [2f541a0a-a431-4db5-9e21-6af570318f7a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:03:28.059408 1443614 system_pods.go:89] "kube-scheduler-calico-212797" [28d64b8e-cb58-4949-aeb7-de18e21cfe7e] Running
	I0929 14:03:28.059412 1443614 system_pods.go:89] "storage-provisioner" [bd294ad1-3d5f-4e4d-bcbb-b2cdbd1b9aef] Pending
	I0929 14:03:28.059441 1443614 retry.go:31] will retry after 283.737622ms: missing components: kube-proxy
	I0929 14:03:28.095618 1443614 kapi.go:214] "coredns" deployment in "kube-system" namespace and "calico-212797" context rescaled to 1 replicas
	I0929 14:03:28.348606 1443614 system_pods.go:86] 10 kube-system pods found
	I0929 14:03:28.348644 1443614 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zmjl5" [6eb0da5b-0c69-46c9-a600-4404a3b9b9d2] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 14:03:28.348654 1443614 system_pods.go:89] "calico-node-gdclv" [c3b760c9-294a-4405-8868-bb0f92072326] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 14:03:28.348661 1443614 system_pods.go:89] "coredns-66bc5c9577-dtx77" [2bc59934-92af-4922-b611-33e8e48fc6ba] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:03:28.348670 1443614 system_pods.go:89] "coredns-66bc5c9577-h2v7h" [350216d3-c01e-4300-9df5-c78fd19ba429] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:03:28.348675 1443614 system_pods.go:89] "etcd-calico-212797" [39d7c3e7-991f-4447-9c23-60b46f63a128] Running
	I0929 14:03:28.348680 1443614 system_pods.go:89] "kube-apiserver-calico-212797" [11e6dfd0-7b98-4d6a-93eb-abf6dc6c55fa] Running
	I0929 14:03:28.348692 1443614 system_pods.go:89] "kube-controller-manager-calico-212797" [973ada77-667a-42fa-be15-dbcaa1dfec1d] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:03:28.348702 1443614 system_pods.go:89] "kube-proxy-q2pqh" [2f541a0a-a431-4db5-9e21-6af570318f7a] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:03:28.348707 1443614 system_pods.go:89] "kube-scheduler-calico-212797" [28d64b8e-cb58-4949-aeb7-de18e21cfe7e] Running
	I0929 14:03:28.348713 1443614 system_pods.go:89] "storage-provisioner" [bd294ad1-3d5f-4e4d-bcbb-b2cdbd1b9aef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:03:28.348737 1443614 retry.go:31] will retry after 324.735519ms: missing components: kube-proxy
	I0929 14:03:28.677774 1443614 system_pods.go:86] 10 kube-system pods found
	I0929 14:03:28.677855 1443614 system_pods.go:89] "calico-kube-controllers-59556d9b4c-zmjl5" [6eb0da5b-0c69-46c9-a600-4404a3b9b9d2] Pending / Ready:ContainersNotReady (containers with unready status: [calico-kube-controllers]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-kube-controllers])
	I0929 14:03:28.677898 1443614 system_pods.go:89] "calico-node-gdclv" [c3b760c9-294a-4405-8868-bb0f92072326] Pending / Initialized:ContainersNotInitialized (containers with incomplete status: [upgrade-ipam install-cni mount-bpffs]) / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
	I0929 14:03:28.677920 1443614 system_pods.go:89] "coredns-66bc5c9577-dtx77" [2bc59934-92af-4922-b611-33e8e48fc6ba] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:03:28.677956 1443614 system_pods.go:89] "coredns-66bc5c9577-h2v7h" [350216d3-c01e-4300-9df5-c78fd19ba429] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:03:28.677978 1443614 system_pods.go:89] "etcd-calico-212797" [39d7c3e7-991f-4447-9c23-60b46f63a128] Running
	I0929 14:03:28.677997 1443614 system_pods.go:89] "kube-apiserver-calico-212797" [11e6dfd0-7b98-4d6a-93eb-abf6dc6c55fa] Running
	I0929 14:03:28.678029 1443614 system_pods.go:89] "kube-controller-manager-calico-212797" [973ada77-667a-42fa-be15-dbcaa1dfec1d] Running
	I0929 14:03:28.678051 1443614 system_pods.go:89] "kube-proxy-q2pqh" [2f541a0a-a431-4db5-9e21-6af570318f7a] Running
	I0929 14:03:28.678068 1443614 system_pods.go:89] "kube-scheduler-calico-212797" [28d64b8e-cb58-4949-aeb7-de18e21cfe7e] Running
	I0929 14:03:28.678087 1443614 system_pods.go:89] "storage-provisioner" [bd294ad1-3d5f-4e4d-bcbb-b2cdbd1b9aef] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:03:28.678120 1443614 system_pods.go:126] duration metric: took 627.026386ms to wait for k8s-apps to be running ...
	I0929 14:03:28.678145 1443614 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:03:28.678231 1443614 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:03:28.693535 1443614 system_svc.go:56] duration metric: took 15.380228ms WaitForService to wait for kubelet
	I0929 14:03:28.693608 1443614 kubeadm.go:578] duration metric: took 2.839711288s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:03:28.693643 1443614 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:03:28.697487 1443614 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:03:28.697568 1443614 node_conditions.go:123] node cpu capacity is 2
	I0929 14:03:28.697595 1443614 node_conditions.go:105] duration metric: took 3.931177ms to run NodePressure ...
	I0929 14:03:28.697619 1443614 start.go:241] waiting for startup goroutines ...
	I0929 14:03:28.697652 1443614 start.go:246] waiting for cluster config update ...
	I0929 14:03:28.697681 1443614 start.go:255] writing updated cluster config ...
	I0929 14:03:28.698004 1443614 ssh_runner.go:195] Run: rm -f paused
	I0929 14:03:28.702079 1443614 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:03:28.706880 1443614 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-dtx77" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:03:30.713408 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:32.724482 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:35.216064 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:37.227032 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:39.716485 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:42.214897 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:44.713500 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:46.722039 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:49.229702 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:51.230492 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:53.232367 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	W0929 14:03:55.284331 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-dtx77" is not "Ready", error: <nil>
	I0929 14:03:56.213202 1443614 pod_ready.go:99] pod "coredns-66bc5c9577-dtx77" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-dtx77" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-dtx77" not found
	I0929 14:03:56.213231 1443614 pod_ready.go:86] duration metric: took 27.506279803s for pod "coredns-66bc5c9577-dtx77" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:03:56.213242 1443614 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-h2v7h" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:03:58.219291 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:00.257774 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:02.722464 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:05.223787 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:07.719796 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:09.727494 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:12.218796 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:14.218891 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:16.240834 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:18.731130 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:20.736948 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:23.224038 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:25.719686 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:27.719753 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:29.719917 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:32.223125 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:34.722191 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:37.224391 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:39.719137 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:41.721330 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:44.220570 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:46.221131 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:48.720277 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:50.724792 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:52.749745 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:55.224226 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:57.236714 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:04:59.721926 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:02.218503 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:04.219144 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:06.224971 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:08.719104 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:11.218587 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:13.220925 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:15.721831 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:18.219940 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:20.718759 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:22.721588 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:25.226897 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:27.722216 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:29.725486 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:32.222512 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:34.226345 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:36.721303 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:38.721348 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:41.232061 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:43.727458 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:46.238724 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:48.722236 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:51.222984 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:53.719841 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:56.219566 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:05:58.220124 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:00.226836 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:02.721044 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:04.722282 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:06.722747 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:09.218442 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:11.218904 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:13.227873 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:15.720088 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:18.221021 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:20.719358 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:22.720730 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:25.219714 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:27.221546 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:29.719202 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:31.719441 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:34.218645 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:36.219890 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:38.720195 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:41.220786 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:43.719242 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:45.719462 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:48.218499 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:50.219638 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:52.722933 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:55.218844 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:06:57.718913 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:00.242284 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:02.721565 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:04.727174 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:07.221759 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:09.719699 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:12.219638 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:14.221528 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:16.718274 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:18.720660 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:20.732828 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:23.223988 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:25.740733 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	W0929 14:07:28.225121 1443614 pod_ready.go:104] pod "coredns-66bc5c9577-h2v7h" is not "Ready", error: <nil>
	I0929 14:07:28.702351 1443614 pod_ready.go:86] duration metric: took 3m32.489093122s for pod "coredns-66bc5c9577-h2v7h" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:07:28.702379 1443614 pod_ready.go:65] not all pods in "kube-system" namespace with "k8s-app=kube-dns" label are "Ready", will retry: waitPodCondition: context deadline exceeded
	I0929 14:07:28.702393 1443614 pod_ready.go:40] duration metric: took 4m0.00024189s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:07:28.706721 1443614 out.go:203] 
	W0929 14:07:28.709949 1443614 out.go:285] X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	X Exiting due to GUEST_START: extra waiting: WaitExtra: context deadline exceeded
	I0929 14:07:28.713278 1443614 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:114: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/calico/Start (288.95s)
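The wall of pod_ready.go warnings above is minikube's extra-wait loop polling the kube-dns pod every couple of seconds until its 4m budget expires; coredns never reported Ready, so the start exits with GUEST_START. For local triage, roughly the same wait plus the usual follow-up checks can be run by hand. This is only a sketch: the profile name is a placeholder, because this excerpt does not show which calico profile was used.

  # <profile> is hypothetical - substitute the failing calico profile's name.
  PROFILE=<profile>
  kubectl --context "$PROFILE" -n kube-system get pods -l k8s-app=kube-dns -o wide
  # Roughly what pod_ready.go is doing: block until Ready or the timeout expires.
  kubectl --context "$PROFILE" -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
  # If it never turns Ready, the pod events usually name the cause (CNI not up, image pulls, failing probes).
  kubectl --context "$PROFILE" -n kube-system describe pod -l k8s-app=kube-dns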

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2srlk" [0ead75df-9638-4d39-af53-82c7b8b1bc64] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:337: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-062731 -n old-k8s-version-062731
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 14:22:40.811622812 +0000 UTC m=+4854.076836173
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-062731 describe po kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context old-k8s-version-062731 describe po kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-2srlk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-062731/192.168.85.2
Start Time:       Mon, 29 Sep 2025 14:13:38 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r6v64 (ro)
Conditions:
Type              Status
Initialized       True 
Ready             False 
ContainersReady   False 
PodScheduled      True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-r6v64:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  9m2s                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk to old-k8s-version-062731
Normal   Pulling    7m43s (x4 over 9m1s)  kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     7m42s (x4 over 9m1s)  kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     7m42s (x4 over 9m1s)  kubelet            Error: ErrImagePull
Warning  Failed     7m15s (x6 over 9m)    kubelet            Error: ImagePullBackOff
Normal   BackOff    4m1s (x20 over 9m)    kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context old-k8s-version-062731 logs kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context old-k8s-version-062731 logs kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard: exit status 1 (152.193627ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-2srlk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context old-k8s-version-062731 logs kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
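The events above give the root cause: docker.io rejected the unauthenticated dashboard pull with toomanyrequests, so the pod sat in ImagePullBackOff for the full 9m window. When re-running this suite locally, one possible workaround is to pull the image once on the host and side-load it into the profile so the kubelet never has to reach docker.io. The image reference and profile name below are copied from this log; whether image load accepts the digest-pinned form is an assumption to verify.

  # Pull the exact pinned reference once on the host (docker login first raises the pull limit).
  docker pull docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
  # Side-load it into the node so the kubelet finds it locally instead of pulling.
  out/minikube-linux-arm64 -p old-k8s-version-062731 image load docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
  # Confirm the image is visible inside the node's runtime.
  out/minikube-linux-arm64 -p old-k8s-version-062731 image ls | grep dashboard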
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-062731
helpers_test.go:243: (dbg) docker inspect old-k8s-version-062731:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945",
	        "Created": "2025-09-29T14:11:34.338221643Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1550943,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T14:13:10.81954335Z",
	            "FinishedAt": "2025-09-29T14:13:09.92245346Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945/hosts",
	        "LogPath": "/var/lib/docker/containers/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945-json.log",
	        "Name": "/old-k8s-version-062731",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-062731:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-062731",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945",
	                "LowerDir": "/var/lib/docker/overlay2/69dea7ead802aefaa9de4bbdf0ca143df3900bc5dc898f554b4cd111e13589aa-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69dea7ead802aefaa9de4bbdf0ca143df3900bc5dc898f554b4cd111e13589aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69dea7ead802aefaa9de4bbdf0ca143df3900bc5dc898f554b4cd111e13589aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69dea7ead802aefaa9de4bbdf0ca143df3900bc5dc898f554b4cd111e13589aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-062731",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-062731/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-062731",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-062731",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-062731",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9fc68e444a9d0af669b47990d0163c5f87dffe2e2cbfc5be659a4669112c20ac",
	            "SandboxKey": "/var/run/docker/netns/9fc68e444a9d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34286"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34287"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34290"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34288"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34289"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-062731": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:9e:18:d4:ee:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3df0265d4a1e81a524901d6aaa18a947950c22eeebbdc38ea9e67bd3e2f8ebbf",
	                    "EndpointID": "8845ad082418d90ac76bb0f232add59363516ccc1994a57e2918033907e4b693",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-062731",
	                        "5f28f54ae5c5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
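Most of the inspect dump above is noise for this particular failure; when only the container state or the forwarded ports matter, a Go-template filter keeps the post-mortem readable. The container name is taken from the log above; the templates are generic docker inspect usage, not something the harness runs.

  # Just the state block and the host port mappings for the node container.
  docker inspect -f '{{json .State}}' old-k8s-version-062731
  docker inspect -f '{{json .NetworkSettings.Ports}}' old-k8s-version-062731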
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-062731 -n old-k8s-version-062731
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-062731 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-062731 logs -n 25: (1.565292891s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p kubenet-212797 sudo docker system info                                                                                                                                                                                                       │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                 │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                           │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cri-dockerd --version                                                                                                                                                                                                    │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat containerd --no-pager                                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                               │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /etc/containerd/config.toml                                                                                                                                                                                          │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo containerd config dump                                                                                                                                                                                                   │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                            │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │                     │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat crio --no-pager                                                                                                                                                                                            │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                  │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo crio config                                                                                                                                                                                                              │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ delete  │ -p kubenet-212797                                                                                                                                                                                                                               │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ start   │ -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-062731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ stop    │ -p old-k8s-version-062731 --alsologtostderr -v=3                                                                                                                                                                                                │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-062731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ start   │ -p old-k8s-version-062731 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-983174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ stop    │ -p no-preload-983174 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:14 UTC │
	│ addons  │ enable dashboard -p no-preload-983174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:14 UTC │ 29 Sep 25 14:14 UTC │
	│ start   │ -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:14 UTC │ 29 Sep 25 14:15 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 14:14:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 14:14:09.446915 1556666 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:14:09.447165 1556666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:14:09.447200 1556666 out.go:374] Setting ErrFile to fd 2...
	I0929 14:14:09.447220 1556666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:14:09.447495 1556666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:14:09.447946 1556666 out.go:368] Setting JSON to false
	I0929 14:14:09.449072 1556666 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":21402,"bootTime":1759133848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:14:09.449209 1556666 start.go:140] virtualization:  
	I0929 14:14:09.452257 1556666 out.go:179] * [no-preload-983174] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:14:09.456099 1556666 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:14:09.456265 1556666 notify.go:220] Checking for updates...
	I0929 14:14:09.459654 1556666 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:14:09.462628 1556666 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:09.465578 1556666 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:14:09.468487 1556666 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:14:09.471340 1556666 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:14:09.474663 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:09.475308 1556666 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:14:09.502198 1556666 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:14:09.502336 1556666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:14:09.561225 1556666 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:14:09.551094641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:14:09.561332 1556666 docker.go:318] overlay module found
	I0929 14:14:09.566299 1556666 out.go:179] * Using the docker driver based on existing profile
	I0929 14:14:09.569150 1556666 start.go:304] selected driver: docker
	I0929 14:14:09.569168 1556666 start.go:924] validating driver "docker" against &{Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:09.569285 1556666 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:14:09.570017 1556666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:14:09.624942 1556666 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:14:09.615982942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:14:09.625279 1556666 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:14:09.625316 1556666 cni.go:84] Creating CNI manager for ""
	I0929 14:14:09.625393 1556666 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:14:09.625438 1556666 start.go:348] cluster config:
	{Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:09.628718 1556666 out.go:179] * Starting "no-preload-983174" primary control-plane node in "no-preload-983174" cluster
	I0929 14:14:09.631576 1556666 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:14:09.634419 1556666 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:14:09.637280 1556666 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:14:09.637361 1556666 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:14:09.637432 1556666 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/config.json ...
	I0929 14:14:09.637750 1556666 cache.go:107] acquiring lock: {Name:mkbf722085a8c6cd247df0776d9bc514bf99781b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.637851 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 14:14:09.637923 1556666 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 177.364µs
	I0929 14:14:09.637940 1556666 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 14:14:09.637955 1556666 cache.go:107] acquiring lock: {Name:mk30f19321bc3b42d291063dc85a66705246f7e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638002 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 14:14:09.638013 1556666 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0" took 60.275µs
	I0929 14:14:09.638030 1556666 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 14:14:09.638041 1556666 cache.go:107] acquiring lock: {Name:mk2f793d2d4a07e670fda7f22f83aeba125cecc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638080 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 14:14:09.638089 1556666 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0" took 49.937µs
	I0929 14:14:09.638096 1556666 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 14:14:09.638106 1556666 cache.go:107] acquiring lock: {Name:mkc74eaa586dd62e4e7bb32f19e0778bae528158 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638136 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 14:14:09.638144 1556666 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0" took 39.968µs
	I0929 14:14:09.638151 1556666 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 14:14:09.638160 1556666 cache.go:107] acquiring lock: {Name:mk3285eeb8c57d45d5a563781eb999cc08d9baf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638189 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 14:14:09.638197 1556666 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0" took 39.114µs
	I0929 14:14:09.638204 1556666 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 14:14:09.638213 1556666 cache.go:107] acquiring lock: {Name:mkbc5650bf66f5bda3f443eba33f59d2953325c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638242 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0929 14:14:09.638251 1556666 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.09µs
	I0929 14:14:09.638257 1556666 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 14:14:09.638266 1556666 cache.go:107] acquiring lock: {Name:mk1e873b26d63631af61d7ed1e9134ed28465b53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638295 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 14:14:09.638304 1556666 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.049µs
	I0929 14:14:09.638310 1556666 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 14:14:09.638330 1556666 cache.go:107] acquiring lock: {Name:mk303304602324c8e2b92b82ec131997d8ec523d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638360 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 14:14:09.638369 1556666 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 46.787µs
	I0929 14:14:09.638375 1556666 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 14:14:09.638381 1556666 cache.go:87] Successfully saved all images to host disk.
	I0929 14:14:09.656713 1556666 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:14:09.656737 1556666 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:14:09.656754 1556666 cache.go:232] Successfully downloaded all kic artifacts
	I0929 14:14:09.656776 1556666 start.go:360] acquireMachinesLock for no-preload-983174: {Name:mke1e7fc5da9d04523b73b29b2664621e2ac37f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.656829 1556666 start.go:364] duration metric: took 38.516µs to acquireMachinesLock for "no-preload-983174"
	I0929 14:14:09.656855 1556666 start.go:96] Skipping create...Using existing machine configuration
	I0929 14:14:09.656864 1556666 fix.go:54] fixHost starting: 
	I0929 14:14:09.657131 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:09.674254 1556666 fix.go:112] recreateIfNeeded on no-preload-983174: state=Stopped err=<nil>
	W0929 14:14:09.674292 1556666 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 14:14:09.677584 1556666 out.go:252] * Restarting existing docker container for "no-preload-983174" ...
	I0929 14:14:09.677673 1556666 cli_runner.go:164] Run: docker start no-preload-983174
	I0929 14:14:09.938637 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:09.957785 1556666 kic.go:430] container "no-preload-983174" state is running.
	I0929 14:14:09.959526 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:09.982824 1556666 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/config.json ...
	I0929 14:14:09.983047 1556666 machine.go:93] provisionDockerMachine start ...
	I0929 14:14:09.983106 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:10.007116 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:10.007471 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:10.007482 1556666 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:14:10.008211 1556666 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54734->127.0.0.1:34291: read: connection reset by peer
	I0929 14:14:13.168318 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-983174
	
	I0929 14:14:13.168401 1556666 ubuntu.go:182] provisioning hostname "no-preload-983174"
	I0929 14:14:13.168478 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:13.191152 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:13.191553 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:13.191572 1556666 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-983174 && echo "no-preload-983174" | sudo tee /etc/hostname
	I0929 14:14:13.354864 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-983174
	
	I0929 14:14:13.354956 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:13.373283 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:13.373591 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:13.373619 1556666 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-983174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-983174/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-983174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:14:13.520952 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:14:13.520977 1556666 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:14:13.520994 1556666 ubuntu.go:190] setting up certificates
	I0929 14:14:13.521004 1556666 provision.go:84] configureAuth start
	I0929 14:14:13.521063 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:13.538844 1556666 provision.go:143] copyHostCerts
	I0929 14:14:13.538915 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:14:13.538938 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:14:13.539019 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:14:13.539171 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:14:13.539183 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:14:13.539212 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:14:13.539284 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:14:13.539295 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:14:13.539321 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:14:13.539380 1556666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.no-preload-983174 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-983174]
	I0929 14:14:14.175612 1556666 provision.go:177] copyRemoteCerts
	I0929 14:14:14.175688 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:14:14.175734 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.193690 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:14.293882 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 14:14:14.318335 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 14:14:14.344180 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:14:14.369452 1556666 provision.go:87] duration metric: took 848.423896ms to configureAuth
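The configureAuth step above regenerates the machine's TLS material: the host certs are copied into the profile and a server certificate is issued for the SANs listed in the log (127.0.0.1, 192.168.76.2, localhost, minikube, no-preload-983174). A minimal sketch, assuming openssl is available on the build host, for double-checking those SANs on the generated server.pem:

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'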
	I0929 14:14:14.369478 1556666 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:14:14.369677 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:14.369735 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.387401 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.387709 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.387723 1556666 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:14:14.529052 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:14:14.529074 1556666 ubuntu.go:71] root file system type: overlay
	I0929 14:14:14.529186 1556666 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:14:14.529255 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.547682 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.547997 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.548083 1556666 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:14:14.705061 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:14:14.705158 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.723963 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.724277 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.724302 1556666 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:14:14.871746 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
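The command above is minikube's idempotent unit update: the rendered docker.service is written to a .new file, and only when its content differs from the installed unit is it moved into place, followed by daemon-reload, enable and restart. A minimal standalone sketch of that compare-and-swap pattern, using a hypothetical example.service rather than the real docker unit:

    # Hypothetical unit, only to illustrate the pattern; the log targets
    # /lib/systemd/system/docker.service.
    UNIT=/lib/systemd/system/example.service
    printf '%s\n' '[Unit]' 'Description=Example' '[Service]' 'ExecStart=/usr/bin/true' \
      '[Install]' 'WantedBy=multi-user.target' | sudo tee "${UNIT}.new" >/dev/null
    if ! sudo diff -u "$UNIT" "${UNIT}.new"; then
      sudo mv "${UNIT}.new" "$UNIT"
      sudo systemctl daemon-reload
      sudo systemctl enable example.service
      sudo systemctl restart example.service
    fi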
	I0929 14:14:14.871808 1556666 machine.go:96] duration metric: took 4.888752094s to provisionDockerMachine
	I0929 14:14:14.871835 1556666 start.go:293] postStartSetup for "no-preload-983174" (driver="docker")
	I0929 14:14:14.871865 1556666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:14:14.871951 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:14:14.872027 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.889467 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:14.990105 1556666 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:14:14.993594 1556666 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:14:14.993625 1556666 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:14:14.993636 1556666 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:14:14.993642 1556666 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:14:14.993655 1556666 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:14:14.993707 1556666 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:14:14.993801 1556666 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:14:14.993924 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:14:15.010275 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:14:15.041050 1556666 start.go:296] duration metric: took 169.180506ms for postStartSetup
	I0929 14:14:15.041206 1556666 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:14:15.041284 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.059737 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.157816 1556666 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:14:15.162824 1556666 fix.go:56] duration metric: took 5.505952464s for fixHost
	I0929 14:14:15.162849 1556666 start.go:83] releasing machines lock for "no-preload-983174", held for 5.506005527s
	I0929 14:14:15.162917 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:15.180675 1556666 ssh_runner.go:195] Run: cat /version.json
	I0929 14:14:15.180722 1556666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:14:15.180777 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.180726 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.198974 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.200600 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.292199 1556666 ssh_runner.go:195] Run: systemctl --version
	I0929 14:14:15.427571 1556666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:14:15.431914 1556666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:14:15.452046 1556666 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:14:15.452120 1556666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:14:15.461413 1556666 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 14:14:15.461440 1556666 start.go:495] detecting cgroup driver to use...
	I0929 14:14:15.461473 1556666 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:14:15.461565 1556666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:14:15.477405 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:14:15.489101 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:14:15.499317 1556666 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:14:15.499406 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:14:15.512856 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:14:15.522949 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:14:15.533163 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:14:15.543072 1556666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:14:15.552630 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:14:15.563081 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:14:15.573609 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:14:15.583981 1556666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:14:15.593828 1556666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
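The two kernel settings touched here are pod-networking prerequisites: IPv4 forwarding must be on, and, for iptables-based proxying, bridged traffic has to be visible to iptables. A quick way to read both back, assuming the br_netfilter sysctl is present on the node:

    sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables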
	I0929 14:14:15.602598 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:15.696246 1556666 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 14:14:15.784831 1556666 start.go:495] detecting cgroup driver to use...
	I0929 14:14:15.784911 1556666 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:14:15.784990 1556666 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:14:15.799531 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:14:15.815605 1556666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:14:15.840157 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:14:15.852831 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:14:15.865897 1556666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:14:15.883856 1556666 ssh_runner.go:195] Run: which cri-dockerd
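At this point /etc/crictl.yaml has been rewritten to point crictl at cri-dockerd instead of containerd, and the cri-dockerd binary has just been located. A small sketch, assuming crictl is installed inside the node as it is in the kicbase image, to confirm which runtime crictl now talks to:

    cat /etc/crictl.yaml
    # expected: runtime-endpoint: unix:///var/run/cri-dockerd.sock
    sudo crictl version   # reports RuntimeName: docker once cri-docker.service is up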
	I0929 14:14:15.887405 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:14:15.896336 1556666 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:14:15.915875 1556666 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:14:16.027307 1556666 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:14:16.115830 1556666 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:14:16.116008 1556666 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 14:14:16.139611 1556666 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:14:16.151714 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:16.249049 1556666 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 14:14:16.778694 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 14:14:16.790316 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:14:16.802021 1556666 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 14:14:16.815179 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:14:16.827094 1556666 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:14:16.928082 1556666 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:14:17.034122 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.145418 1556666 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:14:17.161368 1556666 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:14:17.174566 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.275531 1556666 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:14:17.385986 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:14:17.400398 1556666 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:14:17.400473 1556666 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:14:17.404874 1556666 start.go:563] Will wait 60s for crictl version
	I0929 14:14:17.404984 1556666 ssh_runner.go:195] Run: which crictl
	I0929 14:14:17.408474 1556666 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:14:17.529650 1556666 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 14:14:17.529725 1556666 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:14:17.554294 1556666 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:14:17.585653 1556666 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:14:17.585796 1556666 cli_runner.go:164] Run: docker network inspect no-preload-983174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
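The long --format template above folds the network's name, driver, subnet, gateway, MTU and per-container IPs into one JSON-like blob. For a quick manual look at the same data a simpler template is usually enough; this is an illustrative alternative, not what minikube runs:

    docker network inspect no-preload-983174 --format '{{json .IPAM.Config}}'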
	I0929 14:14:17.607543 1556666 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 14:14:17.611447 1556666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:14:17.622262 1556666 kubeadm.go:875] updating cluster {Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:14:17.622371 1556666 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:14:17.622426 1556666 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:14:17.641266 1556666 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:14:17.641291 1556666 cache_images.go:85] Images are preloaded, skipping loading
	I0929 14:14:17.641301 1556666 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 docker true true} ...
	I0929 14:14:17.641412 1556666 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-983174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 14:14:17.641479 1556666 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:14:17.706589 1556666 cni.go:84] Creating CNI manager for ""
	I0929 14:14:17.706614 1556666 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:14:17.706628 1556666 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:14:17.706649 1556666 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-983174 NodeName:no-preload-983174 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:14:17.706779 1556666 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-983174"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 14:14:17.706850 1556666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:14:17.715757 1556666 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 14:14:17.715829 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:14:17.724341 1556666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0929 14:14:17.742721 1556666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:14:17.761018 1556666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
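The kubeadm config rendered above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. Its cgroupDriver: cgroupfs setting has to agree with what the Docker daemon reports, which minikube checked just before rendering via docker info. A minimal sketch of that consistency check:

    docker info --format '{{.CgroupDriver}}'            # expected here: cgroupfs
    grep '^cgroupDriver:' /var/tmp/minikube/kubeadm.yaml.new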
	I0929 14:14:17.780275 1556666 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:14:17.783823 1556666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:14:17.794621 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.897706 1556666 ssh_runner.go:195] Run: sudo systemctl start kubelet
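kubelet has just been started against the unit file and the 10-kubeadm.conf drop-in written a few lines above. To confirm systemd actually merged that drop-in (the ExecStart with --hostname-override and --node-ip shown earlier), one can dump the combined unit; a small sketch:

    sudo systemctl cat kubelet
    # should list /lib/systemd/system/kubelet.service plus
    # /etc/systemd/system/kubelet.service.d/10-kubeadm.conf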
	I0929 14:14:17.912470 1556666 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174 for IP: 192.168.76.2
	I0929 14:14:17.912492 1556666 certs.go:194] generating shared ca certs ...
	I0929 14:14:17.912534 1556666 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:17.912697 1556666 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:14:17.912749 1556666 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:14:17.912761 1556666 certs.go:256] generating profile certs ...
	I0929 14:14:17.912856 1556666 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.key
	I0929 14:14:17.912930 1556666 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.key.8135a500
	I0929 14:14:17.912982 1556666 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.key
	I0929 14:14:17.913106 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:14:17.913160 1556666 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:14:17.913173 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:14:17.913206 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:14:17.913232 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:14:17.913261 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:14:17.913318 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:14:17.913997 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:14:17.956896 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:14:17.985873 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:14:18.028989 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:14:18.063448 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 14:14:18.096280 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 14:14:18.147356 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:14:18.179221 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 14:14:18.209546 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:14:18.242132 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:14:18.273433 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:14:18.303036 1556666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:14:18.322286 1556666 ssh_runner.go:195] Run: openssl version
	I0929 14:14:18.327639 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:14:18.342520 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.346354 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.346432 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.353769 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:14:18.362808 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:14:18.372034 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.375576 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.375643 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.382977 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:14:18.392026 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:14:18.402458 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.405833 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.405908 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.412741 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
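The b5213941.0, 51391683.0 and 3ec20f2e.0 links created above follow the OpenSSL subject-hash naming convention: each trusted CA is symlinked under its subject hash plus a .0 suffix so OpenSSL can find it without scanning every file in /etc/ssl/certs. A minimal sketch of producing such a link by hand for the minikube CA:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
    # for this CA the hash comes out as b5213941, matching the link in the log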
	I0929 14:14:18.421756 1556666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:14:18.425436 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 14:14:18.432235 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 14:14:18.439307 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 14:14:18.446668 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 14:14:18.453723 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 14:14:18.460904 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 14:14:18.467893 1556666 kubeadm.go:392] StartCluster: {Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:18.468068 1556666 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:14:18.485585 1556666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:14:18.497293 1556666 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 14:14:18.497323 1556666 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 14:14:18.497382 1556666 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 14:14:18.506278 1556666 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 14:14:18.506918 1556666 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-983174" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:18.507237 1556666 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1125775/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-983174" cluster setting kubeconfig missing "no-preload-983174" context setting]
	I0929 14:14:18.507707 1556666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.509252 1556666 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 14:14:18.517799 1556666 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 14:14:18.517889 1556666 kubeadm.go:593] duration metric: took 20.559326ms to restartPrimaryControlPlane
	I0929 14:14:18.517914 1556666 kubeadm.go:394] duration metric: took 50.028401ms to StartCluster
	I0929 14:14:18.517962 1556666 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.518060 1556666 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:18.519066 1556666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.519359 1556666 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:14:18.519673 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:18.519746 1556666 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:14:18.519847 1556666 addons.go:69] Setting storage-provisioner=true in profile "no-preload-983174"
	I0929 14:14:18.519867 1556666 addons.go:238] Setting addon storage-provisioner=true in "no-preload-983174"
	W0929 14:14:18.519877 1556666 addons.go:247] addon storage-provisioner should already be in state true
	I0929 14:14:18.519854 1556666 addons.go:69] Setting default-storageclass=true in profile "no-preload-983174"
	I0929 14:14:18.519904 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.519920 1556666 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-983174"
	I0929 14:14:18.520315 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.520394 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.520954 1556666 addons.go:69] Setting metrics-server=true in profile "no-preload-983174"
	I0929 14:14:18.520978 1556666 addons.go:238] Setting addon metrics-server=true in "no-preload-983174"
	W0929 14:14:18.520986 1556666 addons.go:247] addon metrics-server should already be in state true
	I0929 14:14:18.521025 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.521459 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.524831 1556666 addons.go:69] Setting dashboard=true in profile "no-preload-983174"
	I0929 14:14:18.524862 1556666 addons.go:238] Setting addon dashboard=true in "no-preload-983174"
	W0929 14:14:18.524872 1556666 addons.go:247] addon dashboard should already be in state true
	I0929 14:14:18.524910 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.525477 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.526051 1556666 out.go:179] * Verifying Kubernetes components...
	I0929 14:14:18.530866 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:18.567077 1556666 addons.go:238] Setting addon default-storageclass=true in "no-preload-983174"
	W0929 14:14:18.567104 1556666 addons.go:247] addon default-storageclass should already be in state true
	I0929 14:14:18.567131 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.567570 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.581520 1556666 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:14:18.584559 1556666 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:14:18.584588 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:14:18.584654 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.593397 1556666 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 14:14:18.593473 1556666 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 14:14:18.597233 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 14:14:18.597259 1556666 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 14:14:18.597325 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.603277 1556666 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 14:14:18.607154 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 14:14:18.607180 1556666 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 14:14:18.607257 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.629305 1556666 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:18.629327 1556666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:14:18.629390 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.668010 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.668341 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.688724 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.701428 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.731581 1556666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:14:18.818506 1556666 node_ready.go:35] waiting up to 6m0s for node "no-preload-983174" to be "Ready" ...
	I0929 14:14:18.852403 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 14:14:18.852425 1556666 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 14:14:18.898176 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 14:14:18.898249 1556666 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 14:14:18.910910 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 14:14:18.910979 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 14:14:18.947482 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:14:18.978364 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 14:14:18.978391 1556666 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 14:14:19.033790 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:14:19.033863 1556666 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 14:14:19.075517 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 14:14:19.075595 1556666 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 14:14:19.079301 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:19.154767 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:14:19.219263 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 14:14:19.219348 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 14:14:19.420384 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 14:14:19.420459 1556666 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 14:14:19.739905 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 14:14:19.739987 1556666 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 14:14:19.746335 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.746436 1556666 retry.go:31] will retry after 131.359244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 14:14:19.768963 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.769045 1556666 retry.go:31] will retry after 340.512991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.792479 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 14:14:19.792677 1556666 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 14:14:19.878811 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0929 14:14:19.892912 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.892989 1556666 retry.go:31] will retry after 313.861329ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.937588 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 14:14:19.937617 1556666 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 14:14:19.997110 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:14:19.997138 1556666 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 14:14:20.026643 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:14:20.110232 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:20.207752 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:14:24.291836 1556666 node_ready.go:49] node "no-preload-983174" is "Ready"
	I0929 14:14:24.291865 1556666 node_ready.go:38] duration metric: took 5.473270305s for node "no-preload-983174" to be "Ready" ...
	I0929 14:14:24.291882 1556666 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:14:24.291942 1556666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:14:26.299144 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.420249689s)
	I0929 14:14:26.299258 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.272581766s)
	I0929 14:14:26.299391 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.189124116s)
	I0929 14:14:26.302416 1556666 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-983174 addons enable metrics-server
	
	I0929 14:14:26.411012 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.203209897s)
	I0929 14:14:26.411051 1556666 addons.go:479] Verifying addon metrics-server=true in "no-preload-983174"
	I0929 14:14:26.411223 1556666 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.119270583s)
	I0929 14:14:26.411237 1556666 api_server.go:72] duration metric: took 7.891819741s to wait for apiserver process to appear ...
	I0929 14:14:26.411242 1556666 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:14:26.411258 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:26.415317 1556666 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
	I0929 14:14:26.418321 1556666 addons.go:514] duration metric: took 7.898562435s for enable addons: enabled=[storage-provisioner dashboard default-storageclass metrics-server]
	I0929 14:14:26.422832 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:14:26.422855 1556666 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:14:26.911383 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:26.926872 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:14:26.926902 1556666 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:14:27.412099 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:27.421007 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 14:14:27.422346 1556666 api_server.go:141] control plane version: v1.34.0
	I0929 14:14:27.422373 1556666 api_server.go:131] duration metric: took 1.011125009s to wait for apiserver health ...
	I0929 14:14:27.422383 1556666 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:14:27.428732 1556666 system_pods.go:59] 8 kube-system pods found
	I0929 14:14:27.428777 1556666 system_pods.go:61] "coredns-66bc5c9577-846n7" [dd192e93-efcd-416c-b3f2-c56860e96667] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:14:27.428786 1556666 system_pods.go:61] "etcd-no-preload-983174" [5aa66d56-4e0b-426f-af8c-880f7e3c02db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:14:27.428794 1556666 system_pods.go:61] "kube-apiserver-no-preload-983174" [e9e9910a-f91a-40e2-8152-50c95dc16563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:14:27.428801 1556666 system_pods.go:61] "kube-controller-manager-no-preload-983174" [4cdb0775-7e84-4c1c-90b6-a8d68514159c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:14:27.428829 1556666 system_pods.go:61] "kube-proxy-rjpsv" [640460b1-abcd-4490-a152-ceb13067ffb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:14:27.428851 1556666 system_pods.go:61] "kube-scheduler-no-preload-983174" [5fb52905-6a97-4feb-bc63-6a67be970b9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:14:27.428865 1556666 system_pods.go:61] "metrics-server-746fcd58dc-6pt8w" [db3c374a-7d3e-4ebd-9a71-c1245d62d2ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:14:27.428873 1556666 system_pods.go:61] "storage-provisioner" [3e67c2e9-9826-4557-b747-fec5992144f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:14:27.428883 1556666 system_pods.go:74] duration metric: took 6.494789ms to wait for pod list to return data ...
	I0929 14:14:27.428904 1556666 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:14:27.431458 1556666 default_sa.go:45] found service account: "default"
	I0929 14:14:27.431530 1556666 default_sa.go:55] duration metric: took 2.610441ms for default service account to be created ...
	I0929 14:14:27.431555 1556666 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:14:27.527907 1556666 system_pods.go:86] 8 kube-system pods found
	I0929 14:14:27.527993 1556666 system_pods.go:89] "coredns-66bc5c9577-846n7" [dd192e93-efcd-416c-b3f2-c56860e96667] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:14:27.528017 1556666 system_pods.go:89] "etcd-no-preload-983174" [5aa66d56-4e0b-426f-af8c-880f7e3c02db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:14:27.528052 1556666 system_pods.go:89] "kube-apiserver-no-preload-983174" [e9e9910a-f91a-40e2-8152-50c95dc16563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:14:27.528078 1556666 system_pods.go:89] "kube-controller-manager-no-preload-983174" [4cdb0775-7e84-4c1c-90b6-a8d68514159c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:14:27.528099 1556666 system_pods.go:89] "kube-proxy-rjpsv" [640460b1-abcd-4490-a152-ceb13067ffb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:14:27.528119 1556666 system_pods.go:89] "kube-scheduler-no-preload-983174" [5fb52905-6a97-4feb-bc63-6a67be970b9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:14:27.528137 1556666 system_pods.go:89] "metrics-server-746fcd58dc-6pt8w" [db3c374a-7d3e-4ebd-9a71-c1245d62d2ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:14:27.528165 1556666 system_pods.go:89] "storage-provisioner" [3e67c2e9-9826-4557-b747-fec5992144f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:14:27.528189 1556666 system_pods.go:126] duration metric: took 96.616381ms to wait for k8s-apps to be running ...
	I0929 14:14:27.528211 1556666 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:14:27.528293 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:14:27.542062 1556666 system_svc.go:56] duration metric: took 13.832937ms WaitForService to wait for kubelet
	I0929 14:14:27.542130 1556666 kubeadm.go:578] duration metric: took 9.022710418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:14:27.542161 1556666 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:14:27.544948 1556666 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:14:27.545031 1556666 node_conditions.go:123] node cpu capacity is 2
	I0929 14:14:27.545058 1556666 node_conditions.go:105] duration metric: took 2.879218ms to run NodePressure ...
	I0929 14:14:27.545097 1556666 start.go:241] waiting for startup goroutines ...
	I0929 14:14:27.545120 1556666 start.go:246] waiting for cluster config update ...
	I0929 14:14:27.545144 1556666 start.go:255] writing updated cluster config ...
	I0929 14:14:27.545456 1556666 ssh_runner.go:195] Run: rm -f paused
	I0929 14:14:27.554430 1556666 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:14:27.563260 1556666 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-846n7" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:14:29.608788 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:32.070297 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:34.569602 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:37.069056 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:39.574455 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:42.070030 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:44.070122 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:46.570382 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:49.068692 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:51.068939 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:53.569240 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:56.069416 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:58.569070 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	I0929 14:15:00.160680 1556666 pod_ready.go:94] pod "coredns-66bc5c9577-846n7" is "Ready"
	I0929 14:15:00.160769 1556666 pod_ready.go:86] duration metric: took 32.597436105s for pod "coredns-66bc5c9577-846n7" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.164644 1556666 pod_ready.go:83] waiting for pod "etcd-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.229517 1556666 pod_ready.go:94] pod "etcd-no-preload-983174" is "Ready"
	I0929 14:15:00.229599 1556666 pod_ready.go:86] duration metric: took 64.919216ms for pod "etcd-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.283567 1556666 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.359530 1556666 pod_ready.go:94] pod "kube-apiserver-no-preload-983174" is "Ready"
	I0929 14:15:00.359628 1556666 pod_ready.go:86] duration metric: took 75.979002ms for pod "kube-apiserver-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.372119 1556666 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.383080 1556666 pod_ready.go:94] pod "kube-controller-manager-no-preload-983174" is "Ready"
	I0929 14:15:00.383176 1556666 pod_ready.go:86] duration metric: took 10.963097ms for pod "kube-controller-manager-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.486531 1556666 pod_ready.go:83] waiting for pod "kube-proxy-rjpsv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.885176 1556666 pod_ready.go:94] pod "kube-proxy-rjpsv" is "Ready"
	I0929 14:15:00.885204 1556666 pod_ready.go:86] duration metric: took 398.643571ms for pod "kube-proxy-rjpsv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.085775 1556666 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.485654 1556666 pod_ready.go:94] pod "kube-scheduler-no-preload-983174" is "Ready"
	I0929 14:15:01.485682 1556666 pod_ready.go:86] duration metric: took 399.876397ms for pod "kube-scheduler-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.485696 1556666 pod_ready.go:40] duration metric: took 33.931188768s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:15:01.548843 1556666 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 14:15:01.552089 1556666 out.go:179] * Done! kubectl is now configured to use "no-preload-983174" cluster and "default" namespace by default
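	
	The 500 responses at 14:14:26 and the 200 at 14:14:27 above are successive reads of the kube-apiserver health endpoint; the [+]/[-] lines are its per-check breakdown, and the only failing check before it settled was poststarthook/apiservice-discovery-controller. As a minimal sketch (the context name, address, and port are taken from this log, not from the captured commands), the same endpoint can be queried by hand:
	
		# via the kubeconfig context minikube reports creating for this profile
		kubectl --context no-preload-983174 get --raw='/healthz?verbose'
		
		# or directly against the advertised address seen above; TLS verification is
		# skipped only because this is a disposable test cluster
		curl -k 'https://192.168.76.2:8443/healthz?verbose'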
	
	
	==> Docker <==
	Sep 29 14:15:01 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:15:01.015967309Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:15:10 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:15:10.236112629Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:15:10 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:15:10.423704063Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:15:10 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:15:10.423801582Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:15:10 old-k8s-version-062731 cri-dockerd[1211]: time="2025-09-29T14:15:10Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:16:20 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:20.048002721Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:16:20 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:20.150866398Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:16:22 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:22.009688279Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:16:22 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:22.009728788Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:16:22 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:22.012935568Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:16:22 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:22.012989779Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:16:42 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:42.223573295Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:16:42 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:42.426854843Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:16:42 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:16:42.427020030Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:16:42 old-k8s-version-062731 cri-dockerd[1211]: time="2025-09-29T14:16:42Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:19:08 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:08.045009555Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:19:08 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:08.160777072Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:19:09 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:09.008130718Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:19:09 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:09.008180138Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:19:09 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:09.011117393Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:19:09 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:09.011177381Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:19:31 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:31.244233229Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:19:31 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:31.440216334Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:19:31 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:31.440603727Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:19:31 old-k8s-version-062731 cri-dockerd[1211]: time="2025-09-29T14:19:31Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
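	
	The Docker log above repeats two unrelated pull failures: what appears to be an intentionally unresolvable test registry (fake.domain) for the metrics-server image, and Docker Hub's unauthenticated pull rate limit on the dashboard image, alongside the schema-1 deprecation notice for registry.k8s.io/echoserver:1.4. A hedged way to reproduce them from inside the node (profile name and image references are copied from the log; this is a sketch, not part of the captured run):
	
		# open a shell in the node for this profile
		minikube -p old-k8s-version-062731 ssh
		
		# re-triggers the "schema 1 support has been removed" path logged above
		docker pull registry.k8s.io/echoserver:1.4
		
		# an unauthenticated pull of the dashboard digest, which is what the
		# "toomanyrequests" rate-limit error above was hitting
		docker pull kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93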
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	67ee15eacad42       ba04bb24b9575                                                                                         8 minutes ago       Running             storage-provisioner       2                   e7b278f3fdb3f       storage-provisioner
	be636cd58da4e       1611cd07b61d5                                                                                         9 minutes ago       Running             busybox                   1                   4d45421593112       busybox
	7c2d1182a5bf4       97e04611ad434                                                                                         9 minutes ago       Running             coredns                   1                   ea4213f958bd9       coredns-5dd5756b68-pld27
	4de42700ff466       ba04bb24b9575                                                                                         9 minutes ago       Exited              storage-provisioner       1                   e7b278f3fdb3f       storage-provisioner
	43868fe5fc274       940f54a5bcae9                                                                                         9 minutes ago       Running             kube-proxy                1                   cc27fe045f039       kube-proxy-lb4zs
	78bb7c9cf3983       46cc66ccc7c19                                                                                         9 minutes ago       Running             kube-controller-manager   1                   20ad2afb69ba0       kube-controller-manager-old-k8s-version-062731
	5a2886e8d0f34       9cdd6470f48c8                                                                                         9 minutes ago       Running             etcd                      1                   9b30aedda13a5       etcd-old-k8s-version-062731
	a92699fef46e7       762dce4090c5f                                                                                         9 minutes ago       Running             kube-scheduler            1                   fe636016daf88       kube-scheduler-old-k8s-version-062731
	4c82d04b6c3a7       00543d2fe5d71                                                                                         9 minutes ago       Running             kube-apiserver            1                   2eb7cfdc2d5f6       kube-apiserver-old-k8s-version-062731
	bb666f6a8daba       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Exited              busybox                   0                   dc48916618137       busybox
	0bebd4d0f0d74       940f54a5bcae9                                                                                         10 minutes ago      Exited              kube-proxy                0                   4572067f55e02       kube-proxy-lb4zs
	7dc4aaf0f43a8       97e04611ad434                                                                                         10 minutes ago      Exited              coredns                   0                   c3aba0235cdac       coredns-5dd5756b68-pld27
	a0ace307b5dab       9cdd6470f48c8                                                                                         10 minutes ago      Exited              etcd                      0                   f8063aea5ea3f       etcd-old-k8s-version-062731
	1eb33dcdfff48       46cc66ccc7c19                                                                                         10 minutes ago      Exited              kube-controller-manager   0                   e9f30fee80eeb       kube-controller-manager-old-k8s-version-062731
	d19b472de2d44       762dce4090c5f                                                                                         10 minutes ago      Exited              kube-scheduler            0                   8bf13620d0efe       kube-scheduler-old-k8s-version-062731
	9103484f3ae11       00543d2fe5d71                                                                                         10 minutes ago      Exited              kube-apiserver            0                   5f278e55346c6       kube-apiserver-old-k8s-version-062731
	
	
	==> coredns [7c2d1182a5bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44160 - 50023 "HINFO IN 4089803307241079152.5277922079326627374. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025726781s
	
	
	==> coredns [7dc4aaf0f43a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57448 - 38097 "HINFO IN 1055238920401735314.5747428167574435741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022753011s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
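	
	The two coredns blocks above come from the containers listed in the container-status table (7c2d1182a5bf4 currently running, 7dc4aaf0f43a8 exited from before the restart). A sketch for pulling either log again, assuming the containers still exist on the node:
	
		minikube -p old-k8s-version-062731 ssh -- docker logs 7c2d1182a5bf
		minikube -p old-k8s-version-062731 ssh -- docker logs 7dc4aaf0f43a8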
	
	
	==> describe nodes <==
	Name:               old-k8s-version-062731
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-062731
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=old-k8s-version-062731
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T14_12_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 14:11:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-062731
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 14:22:36 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 14:18:32 +0000   Mon, 29 Sep 2025 14:11:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 14:18:32 +0000   Mon, 29 Sep 2025 14:11:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 14:18:32 +0000   Mon, 29 Sep 2025 14:11:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 14:18:32 +0000   Mon, 29 Sep 2025 14:12:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-062731
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 00b3587badb34185a9c0f5e1a840ae3c
	  System UUID:                fb2a2127-d734-4ef5-84b1-07fd32e62650
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-5dd5756b68-pld27                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-old-k8s-version-062731                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kube-apiserver-old-k8s-version-062731             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-old-k8s-version-062731    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-lb4zs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-old-k8s-version-062731             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-57f55c9bc5-fs4wn                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         9m44s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jmjhf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2srlk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  Starting                 9m14s                  kube-proxy       
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node old-k8s-version-062731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node old-k8s-version-062731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientMemory
	  Normal  NodeNotReady             10m                    kubelet          Node old-k8s-version-062731 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                10m                    kubelet          Node old-k8s-version-062731 status is now: NodeReady
	  Normal  RegisteredNode           10m                    node-controller  Node old-k8s-version-062731 event: Registered Node old-k8s-version-062731 in Controller
	  Normal  Starting                 9m24s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m23s (x8 over 9m23s)  kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m23s (x8 over 9m23s)  kubelet          Node old-k8s-version-062731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s (x7 over 9m23s)  kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m4s                   node-controller  Node old-k8s-version-062731 event: Registered Node old-k8s-version-062731 in Controller
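	
	The node description lists the metrics-server and kubernetes-dashboard pods whose images the Docker log above failed to pull; a hedged follow-up (context name assumed to match the profile, as minikube configures by default) is to check their status directly, which by inference from that log should show image-pull errors rather than Running:
	
		kubectl --context old-k8s-version-062731 -n kube-system get pod metrics-server-57f55c9bc5-fs4wn
		kubectl --context old-k8s-version-062731 -n kubernetes-dashboard get pods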
	
	
	==> dmesg <==
	[Sep29 13:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [5a2886e8d0f3] <==
	{"level":"info","ts":"2025-09-29T14:13:20.970054Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-29T14:13:20.970064Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-09-29T14:13:20.970409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed switched to configuration voters=(11459225503572592365)"}
	{"level":"info","ts":"2025-09-29T14:13:20.970459Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","added-peer-id":"9f0758e1c58a86ed","added-peer-peer-urls":["https://192.168.85.2:2380"]}
	{"level":"info","ts":"2025-09-29T14:13:20.970538Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:13:20.970562Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:13:20.98519Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T14:13:20.98545Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T14:13:20.985474Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T14:13:20.985519Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:13:20.985526Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:13:22.702576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T14:13:22.702838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T14:13:22.702984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T14:13:22.703076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T14:13:22.703184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-29T14:13:22.703265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-09-29T14:13:22.703361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-29T14:13:22.708492Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-062731 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T14:13:22.708731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T14:13:22.710019Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-29T14:13:22.708753Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T14:13:22.711077Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T14:13:22.740542Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T14:13:22.740586Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [a0ace307b5da] <==
	{"level":"info","ts":"2025-09-29T14:11:53.527847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-09-29T14:11:53.527939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T14:11:53.528068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-09-29T14:11:53.528157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T14:11:53.529572Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-062731 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T14:11:53.529755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T14:11:53.532589Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T14:11:53.533852Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-29T14:11:53.534112Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:11:53.532739Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T14:11:53.539614Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:11:53.573537Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:11:53.54019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T14:11:53.606342Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T14:11:53.595857Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:12:59.336607Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T14:12:59.336684Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"old-k8s-version-062731","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"warn","ts":"2025-09-29T14:12:59.336775Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:12:59.336846Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:12:59.423616Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:12:59.423731Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T14:12:59.423767Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-09-29T14:12:59.426078Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:12:59.426157Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:12:59.426166Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"old-k8s-version-062731","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 14:22:42 up  6:05,  0 users,  load average: 0.40, 1.54, 3.25
	Linux old-k8s-version-062731 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [4c82d04b6c3a] <==
	E0929 14:20:25.789146       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:20:35.789863       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:20:45.791271       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:20:55.792072       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:21:05.792627       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:21:15.793805       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	I0929 14:21:25.523812       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.142.168:443: connect: connection refused
	I0929 14:21:25.523847       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 14:21:25.795050       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	W0929 14:21:26.728126       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 14:21:26.728227       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 14:21:26.728241       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:21:26.728300       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 14:21:26.728343       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 14:21:26.730308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 14:21:35.795744       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:21:45.796059       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:21:55.796939       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:22:05.797874       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:22:15.798219       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	I0929 14:22:25.524212       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.142.168:443: connect: connection refused
	I0929 14:22:25.524245       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 14:22:25.798629       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["exempt","global-default","leader-election","node-high","system","workload-high","workload-low","catch-all"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:22:35.799125       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-apiserver [9103484f3ae1] <==
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:09.379920       1 logging.go:59] [core] [Channel #69 SubChannel #71] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:09.434680       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:09.466823       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1eb33dcdfff4] <==
	I0929 14:12:14.182836       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-z6bfh"
	I0929 14:12:14.239311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="479.636948ms"
	I0929 14:12:14.260139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.778113ms"
	I0929 14:12:14.260246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.894µs"
	I0929 14:12:14.260347       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.635µs"
	I0929 14:12:14.299098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.572µs"
	I0929 14:12:16.798523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.239µs"
	I0929 14:12:17.416232       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0929 14:12:17.472987       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-z6bfh"
	I0929 14:12:17.512248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.781191ms"
	I0929 14:12:17.540935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.634664ms"
	I0929 14:12:17.542175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.006µs"
	I0929 14:12:17.854446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.128µs"
	I0929 14:12:26.971023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.592µs"
	I0929 14:12:27.035345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.569µs"
	I0929 14:12:27.336232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.105µs"
	I0929 14:12:27.337674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.003µs"
	I0929 14:12:45.202792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.578211ms"
	I0929 14:12:45.206272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="943.739µs"
	I0929 14:12:58.488901       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0929 14:12:58.547637       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-fs4wn"
	I0929 14:12:58.659940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="172.089724ms"
	I0929 14:12:58.714537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="54.347242ms"
	I0929 14:12:58.779199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="64.432346ms"
	I0929 14:12:58.779532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="87.23µs"
	
	
	==> kube-controller-manager [78bb7c9cf398] <==
	I0929 14:18:08.703080       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:18:38.130358       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:18:38.711834       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:19:08.135596       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:19:08.719624       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 14:19:21.013053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="75.857µs"
	I0929 14:19:21.035007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="80.789µs"
	I0929 14:19:32.010573       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="108.162µs"
	I0929 14:19:36.016957       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="83.086µs"
	E0929 14:19:38.140983       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:19:38.727444       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 14:19:47.018852       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="89.421µs"
	I0929 14:20:01.016089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="81.864µs"
	E0929 14:20:08.146357       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:20:08.745929       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:20:38.151731       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:20:38.754152       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:21:08.158017       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:21:08.761469       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:21:38.164264       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:21:38.769237       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:22:08.169523       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:22:08.777593       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:22:38.174581       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:22:38.785246       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0bebd4d0f0d7] <==
	I0929 14:12:17.072304       1 server_others.go:69] "Using iptables proxy"
	I0929 14:12:17.094715       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0929 14:12:17.206485       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:12:17.216258       1 server_others.go:152] "Using iptables Proxier"
	I0929 14:12:17.216479       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 14:12:17.216576       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 14:12:17.216688       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 14:12:17.217019       1 server.go:846] "Version info" version="v1.28.0"
	I0929 14:12:17.217379       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:12:17.256978       1 config.go:188] "Starting service config controller"
	I0929 14:12:17.257036       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 14:12:17.257076       1 config.go:97] "Starting endpoint slice config controller"
	I0929 14:12:17.257080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 14:12:17.259590       1 config.go:315] "Starting node config controller"
	I0929 14:12:17.259719       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 14:12:17.357940       1 shared_informer.go:318] Caches are synced for service config
	I0929 14:12:17.358034       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 14:12:17.362325       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [43868fe5fc27] <==
	I0929 14:13:27.520972       1 server_others.go:69] "Using iptables proxy"
	I0929 14:13:27.540237       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0929 14:13:27.580049       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:13:27.582293       1 server_others.go:152] "Using iptables Proxier"
	I0929 14:13:27.582332       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 14:13:27.582340       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 14:13:27.582368       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 14:13:27.582575       1 server.go:846] "Version info" version="v1.28.0"
	I0929 14:13:27.582585       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:13:27.583541       1 config.go:188] "Starting service config controller"
	I0929 14:13:27.583567       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 14:13:27.583586       1 config.go:97] "Starting endpoint slice config controller"
	I0929 14:13:27.583590       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 14:13:27.586326       1 config.go:315] "Starting node config controller"
	I0929 14:13:27.586342       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 14:13:27.688937       1 shared_informer.go:318] Caches are synced for node config
	I0929 14:13:27.688986       1 shared_informer.go:318] Caches are synced for service config
	I0929 14:13:27.689022       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a92699fef46e] <==
	I0929 14:13:22.289701       1 serving.go:348] Generated self-signed cert in-memory
	W0929 14:13:25.597194       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 14:13:25.597297       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 14:13:25.597327       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 14:13:25.597367       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 14:13:25.694356       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 14:13:25.694602       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:13:25.699662       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 14:13:25.702604       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:13:25.702828       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 14:13:25.703029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0929 14:13:25.741824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0929 14:13:25.742102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0929 14:13:25.820273       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d19b472de2d4] <==
	W0929 14:11:58.975590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0929 14:11:58.975709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0929 14:11:58.975849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0929 14:11:58.975881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0929 14:11:58.976022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0929 14:11:58.976043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0929 14:11:58.976119       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0929 14:11:58.976137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0929 14:11:58.976214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 14:11:58.976232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0929 14:11:58.978006       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0929 14:11:58.978038       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 14:11:58.978329       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0929 14:11:58.978354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0929 14:11:58.978444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0929 14:11:58.978462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0929 14:11:58.978542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0929 14:11:58.978577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0929 14:11:58.978683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 14:11:58.978702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0929 14:12:00.070915       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 14:12:59.549472       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0929 14:12:59.549887       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0929 14:12:59.550118       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0929 14:12:59.550978       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 14:20:53 old-k8s-version-062731 kubelet[1397]: E0929 14:20:53.995025    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:20:53 old-k8s-version-062731 kubelet[1397]: E0929 14:20:53.995838    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:20:57 old-k8s-version-062731 kubelet[1397]: E0929 14:20:57.994831    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:21:04 old-k8s-version-062731 kubelet[1397]: E0929 14:21:04.995701    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:21:05 old-k8s-version-062731 kubelet[1397]: E0929 14:21:05.995050    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:21:11 old-k8s-version-062731 kubelet[1397]: E0929 14:21:11.995724    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:21:18 old-k8s-version-062731 kubelet[1397]: E0929 14:21:18.995682    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:21:18 old-k8s-version-062731 kubelet[1397]: E0929 14:21:18.999150    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:21:25 old-k8s-version-062731 kubelet[1397]: E0929 14:21:25.995245    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:21:33 old-k8s-version-062731 kubelet[1397]: E0929 14:21:33.995462    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:21:33 old-k8s-version-062731 kubelet[1397]: E0929 14:21:33.995800    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:21:40 old-k8s-version-062731 kubelet[1397]: E0929 14:21:40.995192    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:21:45 old-k8s-version-062731 kubelet[1397]: E0929 14:21:45.996876    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:21:48 old-k8s-version-062731 kubelet[1397]: E0929 14:21:48.999012    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:21:53 old-k8s-version-062731 kubelet[1397]: E0929 14:21:53.995288    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:21:58 old-k8s-version-062731 kubelet[1397]: E0929 14:21:58.997480    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:22:03 old-k8s-version-062731 kubelet[1397]: E0929 14:22:03.995166    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:22:04 old-k8s-version-062731 kubelet[1397]: E0929 14:22:04.997347    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:22:12 old-k8s-version-062731 kubelet[1397]: E0929 14:22:12.997267    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:22:15 old-k8s-version-062731 kubelet[1397]: E0929 14:22:15.994478    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:22:18 old-k8s-version-062731 kubelet[1397]: E0929 14:22:18.997656    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:22:23 old-k8s-version-062731 kubelet[1397]: E0929 14:22:23.994920    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:22:29 old-k8s-version-062731 kubelet[1397]: E0929 14:22:29.995248    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:22:33 old-k8s-version-062731 kubelet[1397]: E0929 14:22:33.994685    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:22:35 old-k8s-version-062731 kubelet[1397]: E0929 14:22:35.994648    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	
	
	==> storage-provisioner [4de42700ff46] <==
	I0929 14:13:27.860119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 14:13:57.868123       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [67ee15eacad4] <==
	I0929 14:14:13.188873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 14:14:13.209840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 14:14:13.209918       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 14:14:30.627249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 14:14:30.627670       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-062731_7e7ea937-39f1-4124-8351-bb9fa1f395c7!
	I0929 14:14:30.627403       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"484ee430-38ff-40b6-a402-1d5e1b0d6e78", APIVersion:"v1", ResourceVersion:"737", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-062731_7e7ea937-39f1-4124-8351-bb9fa1f395c7 became leader
	I0929 14:14:30.728290       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-062731_7e7ea937-39f1-4124-8351-bb9fa1f395c7!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-062731 -n old-k8s-version-062731
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-062731 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-fs4wn dashboard-metrics-scraper-5f989dc9cf-jmjhf kubernetes-dashboard-8694d4445c-2srlk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-062731 describe pod metrics-server-57f55c9bc5-fs4wn dashboard-metrics-scraper-5f989dc9cf-jmjhf kubernetes-dashboard-8694d4445c-2srlk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-062731 describe pod metrics-server-57f55c9bc5-fs4wn dashboard-metrics-scraper-5f989dc9cf-jmjhf kubernetes-dashboard-8694d4445c-2srlk: exit status 1 (91.886603ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fs4wn" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-jmjhf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-2srlk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-062731 describe pod metrics-server-57f55c9bc5-fs4wn dashboard-metrics-scraper-5f989dc9cf-jmjhf kubernetes-dashboard-8694d4445c-2srlk: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.25s)
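
For reference, the post-mortem the harness runs above reduces to two kubectl queries: list the pods that are not in the Running phase, then describe them by name. A minimal Go sketch of those same two invocations (an illustration only, assuming the old-k8s-version-062731 kubeconfig context still exists; this is not the helpers_test.go implementation) would look like:

	// post_mortem_sketch.go: illustrative only; mirrors the two kubectl
	// commands shown in the post-mortem output above.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// List pods in any namespace whose phase is not Running, printing
		// only their names (same jsonpath expression the helper uses).
		out, err := exec.Command("kubectl", "--context", "old-k8s-version-062731",
			"get", "po", "-A",
			"-o=jsonpath={.items[*].metadata.name}",
			"--field-selector=status.phase!=Running").CombinedOutput()
		fmt.Printf("non-running pods: %s (err=%v)\n", out, err)

		// Describe one of the reported pods by name, with no namespace flag,
		// exactly as the helper invokes it; the report above shows this
		// returning "NotFound" with exit status 1.
		out, err = exec.Command("kubectl", "--context", "old-k8s-version-062731",
			"describe", "pod", "metrics-server-57f55c9bc5-fs4wn").CombinedOutput()
		fmt.Printf("describe: %s (err=%v)\n", out, err)
	}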

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.94s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0929 14:15:01.960270 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kpkl2" [80983d01-da8e-4456-bdd9-c6b9c062762d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 14:15:02.602956 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:03.684122 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:03.884817 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:06.447106 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:11.568726 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:20.566945 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:21.810827 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:24.298177 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:31.385279 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:38.295975 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:42.292247 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:52.916115 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:52.922403 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:52.933644 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:52.954980 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:52.996391 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:53.077802 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:53.239294 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:53.561312 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:54.203594 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:55.484999 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:58.047066 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:03.168937 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:13.410285 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:23.254725 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:33.891919 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:46.219548 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:53.256671 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:53.263147 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:53.274521 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:53.296060 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:53.337437 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:53.418793 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:53.580440 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:53.902093 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:54.543645 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:55.825947 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:16:58.388274 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:00.342754 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:01.372763 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:03.456367 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:03.509949 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:13.752222 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:14.854130 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:31.157326 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:34.234353 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:45.177389 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:17:50.245337 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:18:15.196247 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:18:36.776129 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:18:59.883461 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:19:02.358080 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:19:30.061559 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:19:37.118092 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:20:01.311650 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:20:03.684744 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:20:20.566142 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:20:29.019217 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:20:38.296480 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:20:52.916036 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:21:20.617750 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:21:43.638955 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:21:53.256752 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:22:00.341273 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:22:03.456623 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:22:20.960346 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
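Note: the cert_rotation errors above appear to be background noise rather than part of this failure. A long-lived client inside the test process (pid 1127640) keeps trying to reload client certificates for profiles that were deleted earlier in the run (kubenet-212797, bridge-212797, flannel-212797, ...), so the referenced client.crt files no longer exist. A quick way to confirm this from the workspace, using the path taken from the log above (a sketch, not part of the test):

    ls /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/

Deleted profiles no longer have a directory (and hence no client.crt) there, which is why every reload attempt logs "no such file or directory".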
helpers_test.go:337: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-983174 -n no-preload-983174
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 14:24:02.303453864 +0000 UTC m=+4935.568667234
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-983174 describe po kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard
E0929 14:24:02.357981 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) kubectl --context no-preload-983174 describe po kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-kpkl2
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-983174/192.168.76.2
Start Time:       Mon, 29 Sep 2025 14:14:30 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f8dx6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-f8dx6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m32s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2 to no-preload-983174
Normal   Pulling    6m37s (x5 over 9m32s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m37s (x5 over 9m31s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m37s (x5 over 9m31s)   kubelet            Error: ErrImagePull
Warning  Failed     4m27s (x20 over 9m31s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m13s (x21 over 9m31s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
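The Events above point at the actual failure: every pull of docker.io/kubernetesui/dashboard from this node is rejected with toomanyrequests (Docker Hub's anonymous pull rate limit), so the container never starts and the pod stays in ImagePullBackOff. Docker Hub's documented rate-limit check can confirm how much anonymous quota the node has left; a sketch, assuming curl and jq are available on the host and using Docker's ratelimitpreview/test image:

    # Fetch an anonymous pull token, then read the rate-limit headers from the registry.
    TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
    curl -s --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

The ratelimit-limit / ratelimit-remaining headers show the remaining quota. Authenticating the node's Docker daemon (docker login) or pointing it at a registry mirror (for example via minikube start's --registry-mirror flag) avoids the anonymous limit.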
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context no-preload-983174 logs kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context no-preload-983174 logs kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard: exit status 1 (102.952586ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-kpkl2" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context no-preload-983174 logs kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
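For reference, the 9m0s wait that expired here is the helper's poll for a Ready dashboard pod. When reproducing outside the test harness, roughly the same condition can be checked by hand (a sketch of an equivalent check, not the helper's actual client-go polling):

    kubectl --context no-preload-983174 -n kubernetes-dashboard wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m

With the image pull blocked by the rate limit shown above, this wait would time out the same way.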
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-983174
helpers_test.go:243: (dbg) docker inspect no-preload-983174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d",
	        "Created": "2025-09-29T14:12:28.585280253Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1556794,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T14:14:09.708801143Z",
	            "FinishedAt": "2025-09-29T14:14:08.873362901Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d/hosts",
	        "LogPath": "/var/lib/docker/containers/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d-json.log",
	        "Name": "/no-preload-983174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-983174:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-983174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d",
	                "LowerDir": "/var/lib/docker/overlay2/d921a03d5757f431a924575c97db02cbf463270d6a3676dd15d1844e7f80e644-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d921a03d5757f431a924575c97db02cbf463270d6a3676dd15d1844e7f80e644/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d921a03d5757f431a924575c97db02cbf463270d6a3676dd15d1844e7f80e644/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d921a03d5757f431a924575c97db02cbf463270d6a3676dd15d1844e7f80e644/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-983174",
	                "Source": "/var/lib/docker/volumes/no-preload-983174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-983174",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-983174",
	                "name.minikube.sigs.k8s.io": "no-preload-983174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a2bb087298b729fe50b6b8b6349476b95b71940799a5347c1d150f1268cad335",
	            "SandboxKey": "/var/run/docker/netns/a2bb087298b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34291"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34292"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34295"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34293"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34294"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-983174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d3:56:45:98:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c8b107e545669f34bd8328f74f0d3a601475a7ffdc4b152c45ea58429e814854",
	                    "EndpointID": "77d1452714aefd40dff3a851f99aacaf7f24c13581907fb53a55aac0a5146483",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-983174",
	                        "f588a9dd031f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
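The host-port mappings buried in the NetworkSettings block above can be read back directly with the same Go template minikube itself runs later in this log (see the cli_runner lines under "Last Start"); for example, to recover the mapped SSH port:

    docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-983174

This returns 34291, the 127.0.0.1 port the SSH provisioner dials during the restart shown below.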
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-983174 -n no-preload-983174
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-983174 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-983174 logs -n 25: (1.346812934s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p kubenet-212797 sudo docker system info                                                                                                                                                                                                       │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                 │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                           │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cri-dockerd --version                                                                                                                                                                                                    │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat containerd --no-pager                                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                               │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /etc/containerd/config.toml                                                                                                                                                                                          │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo containerd config dump                                                                                                                                                                                                   │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                            │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │                     │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat crio --no-pager                                                                                                                                                                                            │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                  │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo crio config                                                                                                                                                                                                              │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ delete  │ -p kubenet-212797                                                                                                                                                                                                                               │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ start   │ -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-062731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ stop    │ -p old-k8s-version-062731 --alsologtostderr -v=3                                                                                                                                                                                                │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-062731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ start   │ -p old-k8s-version-062731 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-983174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ stop    │ -p no-preload-983174 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:14 UTC │
	│ addons  │ enable dashboard -p no-preload-983174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:14 UTC │ 29 Sep 25 14:14 UTC │
	│ start   │ -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:14 UTC │ 29 Sep 25 14:15 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 14:14:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 14:14:09.446915 1556666 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:14:09.447165 1556666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:14:09.447200 1556666 out.go:374] Setting ErrFile to fd 2...
	I0929 14:14:09.447220 1556666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:14:09.447495 1556666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:14:09.447946 1556666 out.go:368] Setting JSON to false
	I0929 14:14:09.449072 1556666 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":21402,"bootTime":1759133848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:14:09.449209 1556666 start.go:140] virtualization:  
	I0929 14:14:09.452257 1556666 out.go:179] * [no-preload-983174] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:14:09.456099 1556666 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:14:09.456265 1556666 notify.go:220] Checking for updates...
	I0929 14:14:09.459654 1556666 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:14:09.462628 1556666 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:09.465578 1556666 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:14:09.468487 1556666 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:14:09.471340 1556666 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:14:09.474663 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:09.475308 1556666 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:14:09.502198 1556666 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:14:09.502336 1556666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:14:09.561225 1556666 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:14:09.551094641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:14:09.561332 1556666 docker.go:318] overlay module found
	I0929 14:14:09.566299 1556666 out.go:179] * Using the docker driver based on existing profile
	I0929 14:14:09.569150 1556666 start.go:304] selected driver: docker
	I0929 14:14:09.569168 1556666 start.go:924] validating driver "docker" against &{Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:09.569285 1556666 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:14:09.570017 1556666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:14:09.624942 1556666 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:14:09.615982942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:14:09.625279 1556666 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:14:09.625316 1556666 cni.go:84] Creating CNI manager for ""
	I0929 14:14:09.625393 1556666 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:14:09.625438 1556666 start.go:348] cluster config:
	{Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:09.628718 1556666 out.go:179] * Starting "no-preload-983174" primary control-plane node in "no-preload-983174" cluster
	I0929 14:14:09.631576 1556666 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:14:09.634419 1556666 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:14:09.637280 1556666 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:14:09.637361 1556666 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:14:09.637432 1556666 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/config.json ...
	I0929 14:14:09.637750 1556666 cache.go:107] acquiring lock: {Name:mkbf722085a8c6cd247df0776d9bc514bf99781b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.637851 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 14:14:09.637923 1556666 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 177.364µs
	I0929 14:14:09.637940 1556666 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 14:14:09.637955 1556666 cache.go:107] acquiring lock: {Name:mk30f19321bc3b42d291063dc85a66705246f7e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638002 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 14:14:09.638013 1556666 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0" took 60.275µs
	I0929 14:14:09.638030 1556666 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 14:14:09.638041 1556666 cache.go:107] acquiring lock: {Name:mk2f793d2d4a07e670fda7f22f83aeba125cecc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638080 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 14:14:09.638089 1556666 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0" took 49.937µs
	I0929 14:14:09.638096 1556666 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 14:14:09.638106 1556666 cache.go:107] acquiring lock: {Name:mkc74eaa586dd62e4e7bb32f19e0778bae528158 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638136 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 14:14:09.638144 1556666 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0" took 39.968µs
	I0929 14:14:09.638151 1556666 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 14:14:09.638160 1556666 cache.go:107] acquiring lock: {Name:mk3285eeb8c57d45d5a563781eb999cc08d9baf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638189 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 14:14:09.638197 1556666 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0" took 39.114µs
	I0929 14:14:09.638204 1556666 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 14:14:09.638213 1556666 cache.go:107] acquiring lock: {Name:mkbc5650bf66f5bda3f443eba33f59d2953325c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638242 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0929 14:14:09.638251 1556666 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.09µs
	I0929 14:14:09.638257 1556666 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 14:14:09.638266 1556666 cache.go:107] acquiring lock: {Name:mk1e873b26d63631af61d7ed1e9134ed28465b53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638295 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 14:14:09.638304 1556666 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.049µs
	I0929 14:14:09.638310 1556666 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 14:14:09.638330 1556666 cache.go:107] acquiring lock: {Name:mk303304602324c8e2b92b82ec131997d8ec523d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638360 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 14:14:09.638369 1556666 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 46.787µs
	I0929 14:14:09.638375 1556666 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 14:14:09.638381 1556666 cache.go:87] Successfully saved all images to host disk.
	I0929 14:14:09.656713 1556666 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:14:09.656737 1556666 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:14:09.656754 1556666 cache.go:232] Successfully downloaded all kic artifacts
	I0929 14:14:09.656776 1556666 start.go:360] acquireMachinesLock for no-preload-983174: {Name:mke1e7fc5da9d04523b73b29b2664621e2ac37f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.656829 1556666 start.go:364] duration metric: took 38.516µs to acquireMachinesLock for "no-preload-983174"
	I0929 14:14:09.656855 1556666 start.go:96] Skipping create...Using existing machine configuration
	I0929 14:14:09.656864 1556666 fix.go:54] fixHost starting: 
	I0929 14:14:09.657131 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:09.674254 1556666 fix.go:112] recreateIfNeeded on no-preload-983174: state=Stopped err=<nil>
	W0929 14:14:09.674292 1556666 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 14:14:09.677584 1556666 out.go:252] * Restarting existing docker container for "no-preload-983174" ...
	I0929 14:14:09.677673 1556666 cli_runner.go:164] Run: docker start no-preload-983174
	I0929 14:14:09.938637 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:09.957785 1556666 kic.go:430] container "no-preload-983174" state is running.
	I0929 14:14:09.959526 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:09.982824 1556666 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/config.json ...
	I0929 14:14:09.983047 1556666 machine.go:93] provisionDockerMachine start ...
	I0929 14:14:09.983106 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:10.007116 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:10.007471 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:10.007482 1556666 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:14:10.008211 1556666 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54734->127.0.0.1:34291: read: connection reset by peer
	I0929 14:14:13.168318 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-983174
	
	I0929 14:14:13.168401 1556666 ubuntu.go:182] provisioning hostname "no-preload-983174"
	I0929 14:14:13.168478 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:13.191152 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:13.191553 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:13.191572 1556666 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-983174 && echo "no-preload-983174" | sudo tee /etc/hostname
	I0929 14:14:13.354864 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-983174
	
	I0929 14:14:13.354956 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:13.373283 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:13.373591 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:13.373619 1556666 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-983174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-983174/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-983174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:14:13.520952 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
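The SSH command above makes the /etc/hosts entry idempotent: do nothing if a line already ends with the hostname, otherwise rewrite the existing 127.0.1.1 line or append one. A rough Go equivalent of that shell logic, operating on an illustrative copy of the file rather than the real /etc/hosts:

package main

import (
	"fmt"
	"os"
	"regexp"
	"strings"
)

// ensureHostsEntry mirrors the grep/sed/tee snippet above.
func ensureHostsEntry(path, hostname string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(string(data), "\n")

	// Already present? Nothing to do.
	want := regexp.MustCompile(`\s` + regexp.QuoteMeta(hostname) + `$`)
	for _, l := range lines {
		if want.MatchString(l) {
			return nil
		}
	}

	// Rewrite an existing 127.0.1.1 entry, otherwise append a new one.
	loopback := regexp.MustCompile(`^127\.0\.1\.1\s`)
	replaced := false
	for i, l := range lines {
		if loopback.MatchString(l) {
			lines[i] = "127.0.1.1 " + hostname
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+hostname)
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts.example", "no-preload-983174"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}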
	I0929 14:14:13.520977 1556666 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:14:13.520994 1556666 ubuntu.go:190] setting up certificates
	I0929 14:14:13.521004 1556666 provision.go:84] configureAuth start
	I0929 14:14:13.521063 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:13.538844 1556666 provision.go:143] copyHostCerts
	I0929 14:14:13.538915 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:14:13.538938 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:14:13.539019 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:14:13.539171 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:14:13.539183 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:14:13.539212 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:14:13.539284 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:14:13.539295 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:14:13.539321 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:14:13.539380 1556666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.no-preload-983174 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-983174]
	I0929 14:14:14.175612 1556666 provision.go:177] copyRemoteCerts
	I0929 14:14:14.175688 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:14:14.175734 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.193690 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:14.293882 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 14:14:14.318335 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 14:14:14.344180 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:14:14.369452 1556666 provision.go:87] duration metric: took 848.423896ms to configureAuth
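configureAuth above generates a CA-signed server certificate whose SANs cover 127.0.0.1, 192.168.76.2, localhost, minikube and the node name. A compact crypto/x509 sketch of that idea, using a throwaway in-memory CA and eliding error handling; it is an illustration of the technique, not the code path minikube runs:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (stands in for the ca.pem / ca-key.pem pair referenced above).
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs listed in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srv := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.no-preload-983174"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "no-preload-983174"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srv, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}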
	I0929 14:14:14.369478 1556666 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:14:14.369677 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:14.369735 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.387401 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.387709 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.387723 1556666 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:14:14.529052 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:14:14.529074 1556666 ubuntu.go:71] root file system type: overlay
	I0929 14:14:14.529186 1556666 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:14:14.529255 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.547682 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.547997 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.548083 1556666 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:14:14.705061 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:14:14.705158 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.723963 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.724277 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.724302 1556666 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:14:14.871746 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:14:14.871808 1556666 machine.go:96] duration metric: took 4.888752094s to provisionDockerMachine
	I0929 14:14:14.871835 1556666 start.go:293] postStartSetup for "no-preload-983174" (driver="docker")
	I0929 14:14:14.871865 1556666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:14:14.871951 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:14:14.872027 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.889467 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:14.990105 1556666 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:14:14.993594 1556666 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:14:14.993625 1556666 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:14:14.993636 1556666 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:14:14.993642 1556666 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:14:14.993655 1556666 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:14:14.993707 1556666 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:14:14.993801 1556666 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:14:14.993924 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:14:15.010275 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:14:15.041050 1556666 start.go:296] duration metric: took 169.180506ms for postStartSetup
	I0929 14:14:15.041206 1556666 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:14:15.041284 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.059737 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.157816 1556666 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:14:15.162824 1556666 fix.go:56] duration metric: took 5.505952464s for fixHost
	I0929 14:14:15.162849 1556666 start.go:83] releasing machines lock for "no-preload-983174", held for 5.506005527s
	I0929 14:14:15.162917 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:15.180675 1556666 ssh_runner.go:195] Run: cat /version.json
	I0929 14:14:15.180722 1556666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:14:15.180777 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.180726 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.198974 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.200600 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.292199 1556666 ssh_runner.go:195] Run: systemctl --version
	I0929 14:14:15.427571 1556666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:14:15.431914 1556666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:14:15.452046 1556666 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:14:15.452120 1556666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:14:15.461413 1556666 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 14:14:15.461440 1556666 start.go:495] detecting cgroup driver to use...
	I0929 14:14:15.461473 1556666 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:14:15.461565 1556666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:14:15.477405 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:14:15.489101 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:14:15.499317 1556666 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:14:15.499406 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:14:15.512856 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:14:15.522949 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:14:15.533163 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:14:15.543072 1556666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:14:15.552630 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:14:15.563081 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:14:15.573609 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:14:15.583981 1556666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:14:15.593828 1556666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 14:14:15.602598 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:15.696246 1556666 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 14:14:15.784831 1556666 start.go:495] detecting cgroup driver to use...
	I0929 14:14:15.784911 1556666 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:14:15.784990 1556666 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:14:15.799531 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:14:15.815605 1556666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:14:15.840157 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:14:15.852831 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:14:15.865897 1556666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:14:15.883856 1556666 ssh_runner.go:195] Run: which cri-dockerd
	I0929 14:14:15.887405 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:14:15.896336 1556666 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:14:15.915875 1556666 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:14:16.027307 1556666 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:14:16.115830 1556666 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:14:16.116008 1556666 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 14:14:16.139611 1556666 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:14:16.151714 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:16.249049 1556666 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 14:14:16.778694 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 14:14:16.790316 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:14:16.802021 1556666 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 14:14:16.815179 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:14:16.827094 1556666 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:14:16.928082 1556666 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:14:17.034122 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.145418 1556666 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:14:17.161368 1556666 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:14:17.174566 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.275531 1556666 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:14:17.385986 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:14:17.400398 1556666 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:14:17.400473 1556666 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:14:17.404874 1556666 start.go:563] Will wait 60s for crictl version
	I0929 14:14:17.404984 1556666 ssh_runner.go:195] Run: which crictl
	I0929 14:14:17.408474 1556666 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:14:17.529650 1556666 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 14:14:17.529725 1556666 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:14:17.554294 1556666 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:14:17.585653 1556666 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:14:17.585796 1556666 cli_runner.go:164] Run: docker network inspect no-preload-983174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
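The --format strings passed to docker here, and to the repeated container inspect calls for port 22/tcp earlier in this log, are Go text/template expressions. A self-contained sketch of how such an index expression evaluates, against stub types that only mimic the shape the template walks (not Docker's real inspect schema):

package main

import (
	"os"
	"text/template"
)

// Stub types shaped loosely like the fields the template dereferences.
type portBinding struct{ HostPort string }

type networkSettings struct {
	Ports map[string][]portBinding
}

type container struct {
	NetworkSettings networkSettings
}

func main() {
	c := container{NetworkSettings: networkSettings{
		Ports: map[string][]portBinding{
			"22/tcp": {{HostPort: "34291"}},
		},
	}}
	// Same shape as the --format string used by cli_runner above:
	// index the Ports map by "22/tcp", take element 0, read HostPort.
	tmpl := template.Must(template.New("port").Parse(
		`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
	_ = tmpl.Execute(os.Stdout, c) // prints 34291
}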
	I0929 14:14:17.607543 1556666 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 14:14:17.611447 1556666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:14:17.622262 1556666 kubeadm.go:875] updating cluster {Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServer
IPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: Moun
tMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:14:17.622371 1556666 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:14:17.622426 1556666 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:14:17.641266 1556666 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:14:17.641291 1556666 cache_images.go:85] Images are preloaded, skipping loading
	I0929 14:14:17.641301 1556666 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 docker true true} ...
	I0929 14:14:17.641412 1556666 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-983174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 14:14:17.641479 1556666 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:14:17.706589 1556666 cni.go:84] Creating CNI manager for ""
	I0929 14:14:17.706614 1556666 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:14:17.706628 1556666 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:14:17.706649 1556666 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-983174 NodeName:no-preload-983174 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/e
tc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:14:17.706779 1556666 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-983174"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 14:14:17.706850 1556666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:14:17.715757 1556666 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 14:14:17.715829 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:14:17.724341 1556666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0929 14:14:17.742721 1556666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:14:17.761018 1556666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I0929 14:14:17.780275 1556666 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:14:17.783823 1556666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:14:17.794621 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.897706 1556666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:14:17.912470 1556666 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174 for IP: 192.168.76.2
	I0929 14:14:17.912492 1556666 certs.go:194] generating shared ca certs ...
	I0929 14:14:17.912534 1556666 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:17.912697 1556666 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:14:17.912749 1556666 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:14:17.912761 1556666 certs.go:256] generating profile certs ...
	I0929 14:14:17.912856 1556666 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.key
	I0929 14:14:17.912930 1556666 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.key.8135a500
	I0929 14:14:17.912982 1556666 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.key
	I0929 14:14:17.913106 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:14:17.913160 1556666 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:14:17.913173 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:14:17.913206 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:14:17.913232 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:14:17.913261 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:14:17.913318 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:14:17.913997 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:14:17.956896 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:14:17.985873 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:14:18.028989 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:14:18.063448 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 14:14:18.096280 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 14:14:18.147356 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:14:18.179221 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 14:14:18.209546 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:14:18.242132 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:14:18.273433 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:14:18.303036 1556666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:14:18.322286 1556666 ssh_runner.go:195] Run: openssl version
	I0929 14:14:18.327639 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:14:18.342520 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.346354 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.346432 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.353769 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:14:18.362808 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:14:18.372034 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.375576 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.375643 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.382977 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:14:18.392026 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:14:18.402458 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.405833 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.405908 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.412741 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
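The openssl/ln pairs above install each CA bundle under its OpenSSL subject hash in /etc/ssl/certs, which is how OpenSSL-based clients locate trusted roots. A small Go sketch of the same sequence, shelling out to the identical openssl invocation seen in the log; paths are illustrative, and creating links under /etc/ssl/certs requires the corresponding privileges:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the certificate's subject hash with
// `openssl x509 -hash -noout -in <cert>` and symlinks <hash>.0 in
// certsDir to the certificate, mirroring the ln -fs step above.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // mimic ln -fs: drop any stale link first
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}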
	I0929 14:14:18.421756 1556666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:14:18.425436 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 14:14:18.432235 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 14:14:18.439307 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 14:14:18.446668 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 14:14:18.453723 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 14:14:18.460904 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0929 14:14:18.467893 1556666 kubeadm.go:392] StartCluster: {Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs
:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMS
ize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:18.468068 1556666 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:14:18.485585 1556666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:14:18.497293 1556666 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 14:14:18.497323 1556666 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 14:14:18.497382 1556666 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 14:14:18.506278 1556666 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 14:14:18.506918 1556666 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-983174" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:18.507237 1556666 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1125775/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-983174" cluster setting kubeconfig missing "no-preload-983174" context setting]
	I0929 14:14:18.507707 1556666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.509252 1556666 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 14:14:18.517799 1556666 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 14:14:18.517889 1556666 kubeadm.go:593] duration metric: took 20.559326ms to restartPrimaryControlPlane
	I0929 14:14:18.517914 1556666 kubeadm.go:394] duration metric: took 50.028401ms to StartCluster
	I0929 14:14:18.517962 1556666 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.518060 1556666 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:18.519066 1556666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.519359 1556666 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:14:18.519673 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:18.519746 1556666 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:14:18.519847 1556666 addons.go:69] Setting storage-provisioner=true in profile "no-preload-983174"
	I0929 14:14:18.519867 1556666 addons.go:238] Setting addon storage-provisioner=true in "no-preload-983174"
	W0929 14:14:18.519877 1556666 addons.go:247] addon storage-provisioner should already be in state true
	I0929 14:14:18.519854 1556666 addons.go:69] Setting default-storageclass=true in profile "no-preload-983174"
	I0929 14:14:18.519904 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.519920 1556666 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-983174"
	I0929 14:14:18.520315 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.520394 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.520954 1556666 addons.go:69] Setting metrics-server=true in profile "no-preload-983174"
	I0929 14:14:18.520978 1556666 addons.go:238] Setting addon metrics-server=true in "no-preload-983174"
	W0929 14:14:18.520986 1556666 addons.go:247] addon metrics-server should already be in state true
	I0929 14:14:18.521025 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.521459 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.524831 1556666 addons.go:69] Setting dashboard=true in profile "no-preload-983174"
	I0929 14:14:18.524862 1556666 addons.go:238] Setting addon dashboard=true in "no-preload-983174"
	W0929 14:14:18.524872 1556666 addons.go:247] addon dashboard should already be in state true
	I0929 14:14:18.524910 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.525477 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.526051 1556666 out.go:179] * Verifying Kubernetes components...
	I0929 14:14:18.530866 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:18.567077 1556666 addons.go:238] Setting addon default-storageclass=true in "no-preload-983174"
	W0929 14:14:18.567104 1556666 addons.go:247] addon default-storageclass should already be in state true
	I0929 14:14:18.567131 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.567570 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.581520 1556666 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:14:18.584559 1556666 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:14:18.584588 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:14:18.584654 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.593397 1556666 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 14:14:18.593473 1556666 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 14:14:18.597233 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 14:14:18.597259 1556666 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 14:14:18.597325 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.603277 1556666 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 14:14:18.607154 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 14:14:18.607180 1556666 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 14:14:18.607257 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.629305 1556666 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:18.629327 1556666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:14:18.629390 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.668010 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.668341 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.688724 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.701428 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.731581 1556666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:14:18.818506 1556666 node_ready.go:35] waiting up to 6m0s for node "no-preload-983174" to be "Ready" ...
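The node_ready line above kicks off a poll-until-deadline loop: re-check the node state at an interval until it reports Ready or 6m0s elapse. A generic sketch of that pattern; the check function here is a placeholder, not minikube's actual node readiness query:

package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil calls check every interval until it reports success or the
// timeout elapses, roughly the shape of "waiting up to 6m0s for node Ready".
func pollUntil(timeout, interval time.Duration, check func() (bool, error)) error {
	deadline := time.Now().Add(timeout)
	for {
		if ok, err := check(); err == nil && ok {
			return nil
		}
		if time.Now().After(deadline) {
			return errors.New("timed out waiting for condition")
		}
		time.Sleep(interval)
	}
}

func main() {
	start := time.Now()
	err := pollUntil(2*time.Second, 200*time.Millisecond, func() (bool, error) {
		// Placeholder condition: succeed after one second.
		return time.Since(start) > time.Second, nil
	})
	fmt.Println("ready:", err == nil)
}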
	I0929 14:14:18.852403 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 14:14:18.852425 1556666 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 14:14:18.898176 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 14:14:18.898249 1556666 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 14:14:18.910910 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 14:14:18.910979 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 14:14:18.947482 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:14:18.978364 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 14:14:18.978391 1556666 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 14:14:19.033790 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:14:19.033863 1556666 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 14:14:19.075517 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 14:14:19.075595 1556666 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 14:14:19.079301 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:19.154767 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:14:19.219263 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 14:14:19.219348 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 14:14:19.420384 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 14:14:19.420459 1556666 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 14:14:19.739905 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 14:14:19.739987 1556666 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 14:14:19.746335 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.746436 1556666 retry.go:31] will retry after 131.359244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 14:14:19.768963 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.769045 1556666 retry.go:31] will retry after 340.512991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.792479 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 14:14:19.792677 1556666 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 14:14:19.878811 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0929 14:14:19.892912 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.892989 1556666 retry.go:31] will retry after 313.861329ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.937588 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 14:14:19.937617 1556666 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 14:14:19.997110 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:14:19.997138 1556666 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 14:14:20.026643 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:14:20.110232 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:20.207752 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:14:24.291836 1556666 node_ready.go:49] node "no-preload-983174" is "Ready"
	I0929 14:14:24.291865 1556666 node_ready.go:38] duration metric: took 5.473270305s for node "no-preload-983174" to be "Ready" ...
	I0929 14:14:24.291882 1556666 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:14:24.291942 1556666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:14:26.299144 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.420249689s)
	I0929 14:14:26.299258 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.272581766s)
	I0929 14:14:26.299391 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.189124116s)
	I0929 14:14:26.302416 1556666 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-983174 addons enable metrics-server
	
	I0929 14:14:26.411012 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.203209897s)
	I0929 14:14:26.411051 1556666 addons.go:479] Verifying addon metrics-server=true in "no-preload-983174"
	I0929 14:14:26.411223 1556666 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.119270583s)
	I0929 14:14:26.411237 1556666 api_server.go:72] duration metric: took 7.891819741s to wait for apiserver process to appear ...
	I0929 14:14:26.411242 1556666 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:14:26.411258 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:26.415317 1556666 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
	I0929 14:14:26.418321 1556666 addons.go:514] duration metric: took 7.898562435s for enable addons: enabled=[storage-provisioner dashboard default-storageclass metrics-server]
	I0929 14:14:26.422832 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:14:26.422855 1556666 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:14:26.911383 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:26.926872 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:14:26.926902 1556666 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:14:27.412099 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:27.421007 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 14:14:27.422346 1556666 api_server.go:141] control plane version: v1.34.0
	I0929 14:14:27.422373 1556666 api_server.go:131] duration metric: took 1.011125009s to wait for apiserver health ...
	I0929 14:14:27.422383 1556666 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:14:27.428732 1556666 system_pods.go:59] 8 kube-system pods found
	I0929 14:14:27.428777 1556666 system_pods.go:61] "coredns-66bc5c9577-846n7" [dd192e93-efcd-416c-b3f2-c56860e96667] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:14:27.428786 1556666 system_pods.go:61] "etcd-no-preload-983174" [5aa66d56-4e0b-426f-af8c-880f7e3c02db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:14:27.428794 1556666 system_pods.go:61] "kube-apiserver-no-preload-983174" [e9e9910a-f91a-40e2-8152-50c95dc16563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:14:27.428801 1556666 system_pods.go:61] "kube-controller-manager-no-preload-983174" [4cdb0775-7e84-4c1c-90b6-a8d68514159c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:14:27.428829 1556666 system_pods.go:61] "kube-proxy-rjpsv" [640460b1-abcd-4490-a152-ceb13067ffb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:14:27.428851 1556666 system_pods.go:61] "kube-scheduler-no-preload-983174" [5fb52905-6a97-4feb-bc63-6a67be970b9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:14:27.428865 1556666 system_pods.go:61] "metrics-server-746fcd58dc-6pt8w" [db3c374a-7d3e-4ebd-9a71-c1245d62d2ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:14:27.428873 1556666 system_pods.go:61] "storage-provisioner" [3e67c2e9-9826-4557-b747-fec5992144f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:14:27.428883 1556666 system_pods.go:74] duration metric: took 6.494789ms to wait for pod list to return data ...
	I0929 14:14:27.428904 1556666 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:14:27.431458 1556666 default_sa.go:45] found service account: "default"
	I0929 14:14:27.431530 1556666 default_sa.go:55] duration metric: took 2.610441ms for default service account to be created ...
	I0929 14:14:27.431555 1556666 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:14:27.527907 1556666 system_pods.go:86] 8 kube-system pods found
	I0929 14:14:27.527993 1556666 system_pods.go:89] "coredns-66bc5c9577-846n7" [dd192e93-efcd-416c-b3f2-c56860e96667] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:14:27.528017 1556666 system_pods.go:89] "etcd-no-preload-983174" [5aa66d56-4e0b-426f-af8c-880f7e3c02db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:14:27.528052 1556666 system_pods.go:89] "kube-apiserver-no-preload-983174" [e9e9910a-f91a-40e2-8152-50c95dc16563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:14:27.528078 1556666 system_pods.go:89] "kube-controller-manager-no-preload-983174" [4cdb0775-7e84-4c1c-90b6-a8d68514159c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:14:27.528099 1556666 system_pods.go:89] "kube-proxy-rjpsv" [640460b1-abcd-4490-a152-ceb13067ffb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:14:27.528119 1556666 system_pods.go:89] "kube-scheduler-no-preload-983174" [5fb52905-6a97-4feb-bc63-6a67be970b9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:14:27.528137 1556666 system_pods.go:89] "metrics-server-746fcd58dc-6pt8w" [db3c374a-7d3e-4ebd-9a71-c1245d62d2ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:14:27.528165 1556666 system_pods.go:89] "storage-provisioner" [3e67c2e9-9826-4557-b747-fec5992144f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:14:27.528189 1556666 system_pods.go:126] duration metric: took 96.616381ms to wait for k8s-apps to be running ...
	I0929 14:14:27.528211 1556666 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:14:27.528293 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:14:27.542062 1556666 system_svc.go:56] duration metric: took 13.832937ms WaitForService to wait for kubelet
	I0929 14:14:27.542130 1556666 kubeadm.go:578] duration metric: took 9.022710418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:14:27.542161 1556666 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:14:27.544948 1556666 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:14:27.545031 1556666 node_conditions.go:123] node cpu capacity is 2
	I0929 14:14:27.545058 1556666 node_conditions.go:105] duration metric: took 2.879218ms to run NodePressure ...
	I0929 14:14:27.545097 1556666 start.go:241] waiting for startup goroutines ...
	I0929 14:14:27.545120 1556666 start.go:246] waiting for cluster config update ...
	I0929 14:14:27.545144 1556666 start.go:255] writing updated cluster config ...
	I0929 14:14:27.545456 1556666 ssh_runner.go:195] Run: rm -f paused
	I0929 14:14:27.554430 1556666 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:14:27.563260 1556666 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-846n7" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:14:29.608788 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:32.070297 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:34.569602 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:37.069056 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:39.574455 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:42.070030 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:44.070122 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:46.570382 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:49.068692 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:51.068939 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:53.569240 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:56.069416 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:58.569070 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	I0929 14:15:00.160680 1556666 pod_ready.go:94] pod "coredns-66bc5c9577-846n7" is "Ready"
	I0929 14:15:00.160769 1556666 pod_ready.go:86] duration metric: took 32.597436105s for pod "coredns-66bc5c9577-846n7" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.164644 1556666 pod_ready.go:83] waiting for pod "etcd-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.229517 1556666 pod_ready.go:94] pod "etcd-no-preload-983174" is "Ready"
	I0929 14:15:00.229599 1556666 pod_ready.go:86] duration metric: took 64.919216ms for pod "etcd-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.283567 1556666 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.359530 1556666 pod_ready.go:94] pod "kube-apiserver-no-preload-983174" is "Ready"
	I0929 14:15:00.359628 1556666 pod_ready.go:86] duration metric: took 75.979002ms for pod "kube-apiserver-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.372119 1556666 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.383080 1556666 pod_ready.go:94] pod "kube-controller-manager-no-preload-983174" is "Ready"
	I0929 14:15:00.383176 1556666 pod_ready.go:86] duration metric: took 10.963097ms for pod "kube-controller-manager-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.486531 1556666 pod_ready.go:83] waiting for pod "kube-proxy-rjpsv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.885176 1556666 pod_ready.go:94] pod "kube-proxy-rjpsv" is "Ready"
	I0929 14:15:00.885204 1556666 pod_ready.go:86] duration metric: took 398.643571ms for pod "kube-proxy-rjpsv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.085775 1556666 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.485654 1556666 pod_ready.go:94] pod "kube-scheduler-no-preload-983174" is "Ready"
	I0929 14:15:01.485682 1556666 pod_ready.go:86] duration metric: took 399.876397ms for pod "kube-scheduler-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.485696 1556666 pod_ready.go:40] duration metric: took 33.931188768s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:15:01.548843 1556666 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 14:15:01.552089 1556666 out.go:179] * Done! kubectl is now configured to use "no-preload-983174" cluster and "default" namespace by default
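
Note on the healthz polling above: the repeated api_server.go lines ("Checking apiserver healthz at https://192.168.76.2:8443/healthz ...") simply re-query the endpoint until it stops answering 500 with a failed "[-]poststarthook/..." entry and returns 200 "ok". The following is a minimal Go sketch of that loop, not minikube's actual implementation; the address comes from the log, and TLS verification is skipped only because this sketch has no access to the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns
// 200 "ok" or the overall timeout expires. A 500 response with a
// "[-]poststarthook/..." line, as seen above, just means "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch only: a real client would trust the cluster CA instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is "ok"
			}
			fmt.Printf("healthz returned %d (%d bytes), retrying\n", resp.StatusCode, len(body))
		}
		time.Sleep(500 * time.Millisecond) // the log above polls roughly every 500ms
	}
	return fmt.Errorf("apiserver did not report healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
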
	
	
	==> Docker <==
	Sep 29 14:15:56 no-preload-983174 dockerd[893]: time="2025-09-29T14:15:56.572676623Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:15:56 no-preload-983174 dockerd[893]: time="2025-09-29T14:15:56.572817113Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:15:56 no-preload-983174 cri-dockerd[1211]: time="2025-09-29T14:15:56Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:16:02 no-preload-983174 dockerd[893]: time="2025-09-29T14:16:02.206858573Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:16:02 no-preload-983174 dockerd[893]: time="2025-09-29T14:16:02.302605181Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:17:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:14.171606641Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:17:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:14.172102408Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:17:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:14.175111286Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:17:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:14.175301705Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:17:19 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:19.392004119Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:17:19 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:19.587329108Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:17:19 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:19.587461638Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:17:19 no-preload-983174 cri-dockerd[1211]: time="2025-09-29T14:17:19Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:17:25 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:25.199453204Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:17:25 no-preload-983174 dockerd[893]: time="2025-09-29T14:17:25.296662287Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.384564329Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.573834314Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.573956088Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:20:01 no-preload-983174 cri-dockerd[1211]: time="2025-09-29T14:20:01Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.589551740Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.589762196Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.592813454Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.592855851Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:20:18 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:18.220808175Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:20:18 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:18.303590758Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
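
Note on the pull failures above: the registry.k8s.io/echoserver:1.4 errors come from the registry only offering a legacy schema-1 manifest (application/vnd.docker.distribution.manifest.v1+prettyjws), which this Docker version no longer accepts, while the kubernetesui/dashboard failure is a separate Docker Hub anonymous pull rate limit. Below is a small, hypothetical Go probe that asks a registry which manifest media type it can serve for that tag; the URL path is inferred from the image reference in the log, and registry token auth, if required, is not handled here.

package main

import (
	"fmt"
	"net/http"
)

func main() {
	// Ask the registry which manifest format it can serve for echoserver:1.4.
	// Modern clients advertise schema-2/OCI media types; if the registry can
	// only answer with the legacy schema-1 type seen in the dockerd log above,
	// the pull is rejected.
	req, err := http.NewRequest(http.MethodGet,
		"https://registry.k8s.io/v2/echoserver/manifests/1.4", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Accept",
		"application/vnd.oci.image.manifest.v1+json, "+
			"application/vnd.docker.distribution.manifest.v2+json, "+
			"application/vnd.docker.distribution.manifest.v1+prettyjws")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// A 401 here would mean the registry wants token auth first (not handled
	// in this sketch); otherwise Content-Type reveals the manifest schema.
	fmt.Println("status:      ", resp.Status)
	fmt.Println("content-type:", resp.Header.Get("Content-Type"))
}
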
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0f3eaee26dfbe       66749159455b3                                                                                         9 minutes ago       Running             storage-provisioner       3                   e1a72773bfa10       storage-provisioner
	12f24daebca75       138784d87c9c5                                                                                         9 minutes ago       Running             coredns                   1                   44bf744032360       coredns-66bc5c9577-846n7
	77c7f00743aa5       1611cd07b61d5                                                                                         9 minutes ago       Running             busybox                   1                   ac1d7c0d3591d       busybox
	d909070e1391e       6fc32d66c1411                                                                                         9 minutes ago       Running             kube-proxy                1                   af0122fda25d3       kube-proxy-rjpsv
	19afabc4b49f0       66749159455b3                                                                                         9 minutes ago       Exited              storage-provisioner       2                   e1a72773bfa10       storage-provisioner
	498c1ebdc119d       d291939e99406                                                                                         9 minutes ago       Running             kube-apiserver            1                   4000c3e6ecb98       kube-apiserver-no-preload-983174
	c5c159be5364e       996be7e86d9b3                                                                                         9 minutes ago       Running             kube-controller-manager   1                   2919f7749a9e1       kube-controller-manager-no-preload-983174
	a935dc35fdae2       a25f5ef9c34c3                                                                                         9 minutes ago       Running             kube-scheduler            1                   78e3a54cf3adf       kube-scheduler-no-preload-983174
	2c81a2420c7b3       a1894772a478e                                                                                         9 minutes ago       Running             etcd                      1                   c571121b4eb3c       etcd-no-preload-983174
	ca1a7d70e46d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              busybox                   0                   0a101000dc92b       busybox
	c2d36972d1b2b       138784d87c9c5                                                                                         10 minutes ago      Exited              coredns                   0                   d63538fd5fb45       coredns-66bc5c9577-846n7
	b1715dc9052f2       6fc32d66c1411                                                                                         10 minutes ago      Exited              kube-proxy                0                   1b91062fbe529       kube-proxy-rjpsv
	5754075776ddd       996be7e86d9b3                                                                                         11 minutes ago      Exited              kube-controller-manager   0                   84989d2afdf58       kube-controller-manager-no-preload-983174
	adf045c7d8305       a25f5ef9c34c3                                                                                         11 minutes ago      Exited              kube-scheduler            0                   d54cd597560fd       kube-scheduler-no-preload-983174
	5d5403194c3fc       d291939e99406                                                                                         11 minutes ago      Exited              kube-apiserver            0                   69eaf13796713       kube-apiserver-no-preload-983174
	be788d93ba9f2       a1894772a478e                                                                                         11 minutes ago      Exited              etcd                      0                   bf6f76dda718c       etcd-no-preload-983174
	
	
	==> coredns [12f24daebca7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35857 - 7983 "HINFO IN 4084271001323329853.8401138301617600447. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025371055s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [c2d36972d1b2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33520 - 54257 "HINFO IN 4588669308009460363.8948613153329900029. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033441071s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
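
Note on the CoreDNS errors above: both instances report the same symptom, list calls to the kubernetes Service VIP time out ("dial tcp 10.96.0.1:443: i/o timeout") until the Service routing is in place, after which the kubernetes plugin syncs and serving continues. A minimal Go sketch of just that first hop, assuming only the ClusterIP shown in the log, which distinguishes a timeout (rules not yet programmed) from a refusal (apiserver down):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The reflector errors above are plain TCP timeouts to the kubernetes
	// Service VIP. This probe reproduces only that first hop: it either
	// connects, times out, or is refused.
	addr := "10.96.0.1:443" // ClusterIP of the kubernetes Service, from the log
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()
	fmt.Println("connected to", addr)
}
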
	
	
	==> describe nodes <==
	Name:               no-preload-983174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-983174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=no-preload-983174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T14_13_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 14:13:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-983174
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 14:23:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 14:20:21 +0000   Mon, 29 Sep 2025 14:13:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 14:20:21 +0000   Mon, 29 Sep 2025 14:13:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 14:20:21 +0000   Mon, 29 Sep 2025 14:13:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 14:20:21 +0000   Mon, 29 Sep 2025 14:13:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-983174
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8a151a63bd744a6813e9ba30655565b
	  System UUID:                4c406f57-abce-4a9d-b98b-1bca4b1d2f5e
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-846n7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-no-preload-983174                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kube-apiserver-no-preload-983174              250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-no-preload-983174     200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-rjpsv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-no-preload-983174              100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-6pt8w               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-srp8w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kpkl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m36s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node no-preload-983174 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node no-preload-983174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node no-preload-983174 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node no-preload-983174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node no-preload-983174 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node no-preload-983174 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                    node-controller  Node no-preload-983174 event: Registered Node no-preload-983174 in Controller
	  Normal   Starting                 9m45s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m45s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m45s (x8 over 9m45s)  kubelet          Node no-preload-983174 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m45s (x8 over 9m45s)  kubelet          Node no-preload-983174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m45s (x7 over 9m45s)  kubelet          Node no-preload-983174 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m45s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m34s                  node-controller  Node no-preload-983174 event: Registered Node no-preload-983174 in Controller
	
	
	==> dmesg <==
	[Sep29 13:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [2c81a2420c7b] <==
	{"level":"warn","ts":"2025-09-29T14:14:22.458052Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.489245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.511888Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37852","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.647881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37862","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.678575Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.697833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.728400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.768113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.789405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.810549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.837667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.874652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.892958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.915415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.925929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.952069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.981654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.996297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.027497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.045056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.078307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.107435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.186436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.202770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.279520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38234","server-name":"","error":"EOF"}
	
	
	==> etcd [be788d93ba9f] <==
	{"level":"warn","ts":"2025-09-29T14:13:03.408426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.421712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.445762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.467629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.485624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.502805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.614717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40198","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:13:58.642170Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T14:13:58.642236Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"no-preload-983174","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-09-29T14:13:58.642343Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:13:59.721755Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:13:59.721836Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:13:59.721858Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-09-29T14:13:59.721959Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T14:13:59.721972Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T14:13:59.722210Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:13:59.722240Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:13:59.722248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T14:13:59.722286Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:13:59.722294Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:13:59.722300Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:13:59.725574Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-09-29T14:13:59.725644Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:13:59.725672Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-29T14:13:59.725678Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"no-preload-983174","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 14:24:03 up  6:06,  0 users,  load average: 0.50, 1.26, 3.00
	Linux no-preload-983174 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [498c1ebdc119] <==
	I0929 14:19:41.320195       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:20:25.268526       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:20:25.268617       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:20:25.268632       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:20:25.270754       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:20:25.270803       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:20:25.270817       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:20:55.295995       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:20:55.339222       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:21:57.751441       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:22:22.715506       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:22:25.269326       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:22:25.269424       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:22:25.269435       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:22:25.271562       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:22:25.271611       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:22:25.271625       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:23:16.624059       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:23:36.409193       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [5d5403194c3f] <==
	W0929 14:13:58.651831       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.651883       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.651925       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.651966       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652008       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652055       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652098       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652145       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652191       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652228       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652270       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652308       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652347       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652385       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652425       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652461       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652498       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653536       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653615       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653668       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653719       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653769       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653819       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653865       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0929 14:13:59.602355       1 cidrallocator.go:210] stopping ServiceCIDR Allocator Controller
	
	
	==> kube-controller-manager [5754075776dd] <==
	I0929 14:13:11.330682       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 14:13:11.340594       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 14:13:11.344047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 14:13:11.344357       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 14:13:11.344370       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 14:13:11.344393       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 14:13:11.344403       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 14:13:11.345010       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 14:13:11.345077       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 14:13:11.345484       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 14:13:11.345525       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 14:13:11.345536       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 14:13:11.345547       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 14:13:11.345554       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 14:13:11.345561       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 14:13:11.345571       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 14:13:11.350337       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 14:13:11.345605       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 14:13:11.362515       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 14:13:11.362972       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-983174" podCIDRs=["10.244.0.0/24"]
	I0929 14:13:11.377760       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 14:13:11.399898       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:13:11.439972       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:13:11.439996       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 14:13:11.440004       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [c5c159be5364] <==
	I0929 14:17:59.777596       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:18:29.690427       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:18:29.784676       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:18:59.695050       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:18:59.793076       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:19:29.699929       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:19:29.803502       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:19:59.704415       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:19:59.811065       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:20:29.715717       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:20:29.819672       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:20:59.720182       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:20:59.827183       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:21:29.726817       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:21:29.837251       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:21:59.733588       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:21:59.844766       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:22:29.752946       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:22:29.852804       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:22:59.757489       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:22:59.860295       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:23:29.762427       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:23:29.868323       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:23:59.766684       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:23:59.876449       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [b1715dc9052f] <==
	I0929 14:13:13.789527       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:13:13.894173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:13:13.995056       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:13:13.995115       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 14:13:13.995222       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:13:14.031622       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:13:14.031764       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:13:14.045417       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:13:14.048968       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:13:14.049149       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:13:14.051260       1 config.go:200] "Starting service config controller"
	I0929 14:13:14.051471       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:13:14.051498       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:13:14.051502       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:13:14.051514       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:13:14.051522       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:13:14.056589       1 config.go:309] "Starting node config controller"
	I0929 14:13:14.056610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:13:14.056618       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:13:14.152324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:13:14.152326       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 14:13:14.152369       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d909070e1391] <==
	I0929 14:14:26.511941       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:14:26.582542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:14:26.688725       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:14:26.688767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 14:14:26.688844       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:14:26.729752       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:14:26.729809       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:14:26.734039       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:14:26.734597       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:14:26.734622       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:14:26.736339       1 config.go:200] "Starting service config controller"
	I0929 14:14:26.736363       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:14:26.736390       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:14:26.736394       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:14:26.736571       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:14:26.736589       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:14:26.741317       1 config.go:309] "Starting node config controller"
	I0929 14:14:26.741342       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:14:26.741350       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:14:26.836902       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 14:14:26.836909       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:14:26.836951       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a935dc35fdae] <==
	I0929 14:14:22.833590       1 serving.go:386] Generated self-signed cert in-memory
	W0929 14:14:24.232403       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 14:14:24.232441       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 14:14:24.232452       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 14:14:24.232460       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 14:14:24.316179       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 14:14:24.316210       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:14:24.319596       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 14:14:24.319716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:14:24.319734       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:14:24.319750       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 14:14:24.421057       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [adf045c7d830] <==
	E0929 14:13:04.415937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:13:04.415973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 14:13:04.416017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 14:13:04.416064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 14:13:04.416214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 14:13:04.416382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:13:04.416455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:13:05.235830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:13:05.286231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 14:13:05.302609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 14:13:05.391347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 14:13:05.418259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:13:05.433615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 14:13:05.440133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 14:13:05.449255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 14:13:05.452759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 14:13:05.599585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 14:13:05.737315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0929 14:13:08.648329       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:13:58.530875       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 14:13:58.536043       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:13:58.536178       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 14:13:58.536194       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 14:13:58.536229       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 14:13:58.540922       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 14:22:22 no-preload-983174 kubelet[1389]: E0929 14:22:22.157961    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:22:25 no-preload-983174 kubelet[1389]: E0929 14:22:25.157121    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:22:28 no-preload-983174 kubelet[1389]: E0929 14:22:28.159797    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:22:35 no-preload-983174 kubelet[1389]: E0929 14:22:35.156998    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:22:38 no-preload-983174 kubelet[1389]: E0929 14:22:38.157320    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:22:43 no-preload-983174 kubelet[1389]: E0929 14:22:43.157113    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:22:48 no-preload-983174 kubelet[1389]: E0929 14:22:48.159619    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:22:51 no-preload-983174 kubelet[1389]: E0929 14:22:51.157443    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:22:54 no-preload-983174 kubelet[1389]: E0929 14:22:54.157252    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:23:01 no-preload-983174 kubelet[1389]: E0929 14:23:01.156665    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:23:02 no-preload-983174 kubelet[1389]: E0929 14:23:02.159059    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:23:05 no-preload-983174 kubelet[1389]: E0929 14:23:05.157069    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:23:12 no-preload-983174 kubelet[1389]: E0929 14:23:12.157478    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:23:17 no-preload-983174 kubelet[1389]: E0929 14:23:17.157041    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:23:19 no-preload-983174 kubelet[1389]: E0929 14:23:19.157092    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:23:24 no-preload-983174 kubelet[1389]: E0929 14:23:24.156792    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:23:31 no-preload-983174 kubelet[1389]: E0929 14:23:31.157562    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:23:33 no-preload-983174 kubelet[1389]: E0929 14:23:33.157399    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:23:38 no-preload-983174 kubelet[1389]: E0929 14:23:38.159333    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:23:42 no-preload-983174 kubelet[1389]: E0929 14:23:42.160132    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:23:44 no-preload-983174 kubelet[1389]: E0929 14:23:44.163263    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:23:51 no-preload-983174 kubelet[1389]: E0929 14:23:51.157221    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:23:55 no-preload-983174 kubelet[1389]: E0929 14:23:55.156916    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:23:56 no-preload-983174 kubelet[1389]: E0929 14:23:56.159138    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:24:03 no-preload-983174 kubelet[1389]: E0929 14:24:03.157238    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	
	
	==> storage-provisioner [0f3eaee26dfb] <==
	W0929 14:23:39.402093       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:41.405302       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:41.410231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:43.412997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:43.423081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:45.428993       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:45.436016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:47.439154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:47.446833       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:49.449920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:49.454593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:51.457651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:51.462376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:53.465385       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:53.472336       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:55.476071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:55.480997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:57.483771       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:57.491415       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:59.494111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:23:59.500976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:24:01.503747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:24:01.508571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:24:03.512412       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:24:03.518443       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [19afabc4b49f] <==
	I0929 14:14:26.359914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 14:14:27.368787       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-983174 -n no-preload-983174
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-983174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-6pt8w dashboard-metrics-scraper-6ffb444bf9-srp8w kubernetes-dashboard-855c9754f9-kpkl2
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-983174 describe pod metrics-server-746fcd58dc-6pt8w dashboard-metrics-scraper-6ffb444bf9-srp8w kubernetes-dashboard-855c9754f9-kpkl2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-983174 describe pod metrics-server-746fcd58dc-6pt8w dashboard-metrics-scraper-6ffb444bf9-srp8w kubernetes-dashboard-855c9754f9-kpkl2: exit status 1 (85.099765ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-6pt8w" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-srp8w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kpkl2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-983174 describe pod metrics-server-746fcd58dc-6pt8w dashboard-metrics-scraper-6ffb444bf9-srp8w kubernetes-dashboard-855c9754f9-kpkl2: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.94s)
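Note on the failure mode recorded above: the three non-running pods never start because each image pull fails for a distinct, reproducible reason — the metrics-server image is deliberately pointed at the unresolvable registry fake.domain (the addon was enabled with --registries=MetricsServer=fake.domain, as the Audit table later in this report shows), the dashboard image hits Docker Hub's unauthenticated pull rate limit, and registry.k8s.io/echoserver:1.4 is rejected because the Docker daemon has dropped Image manifest v1/schema 1 support. A minimal diagnostic sketch for confirming each case by hand, assuming the no-preload-983174 profile is still running; the image names and profile come from the logs above, but these commands are not part of the test run:

	minikube -p no-preload-983174 ssh -- docker pull fake.domain/registry.k8s.io/echoserver:1.4   # fails: lookup fake.domain: no such host (registry override is intentionally unresolvable)
	minikube -p no-preload-983174 ssh -- docker pull registry.k8s.io/echoserver:1.4               # fails: Docker Image manifest v1/schema 1 support has been removed
	minikube -p no-preload-983174 ssh -- docker pull kubernetesui/dashboard:v2.7.0                # may fail with toomanyrequests while unauthenticated
	kubectl --context no-preload-983174 -n kubernetes-dashboard get events --sort-by=.lastTimestamp | tail -n 20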

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.2s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-2srlk" [0ead75df-9638-4d39-af53-82c7b8b1bc64] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 14:22:50.245386 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:23:23.407365 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:23:59.883515 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-062731 -n old-k8s-version-062731
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 14:31:44.094104172 +0000 UTC m=+5397.359317533
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-062731 describe po kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context old-k8s-version-062731 describe po kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard:
Name:             kubernetes-dashboard-8694d4445c-2srlk
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             old-k8s-version-062731/192.168.85.2
Start Time:       Mon, 29 Sep 2025 14:13:38 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=8694d4445c
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-8694d4445c
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-r6v64 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-r6v64:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  18m                  default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk to old-k8s-version-062731
  Normal   Pulling    16m (x4 over 18m)    kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     16m (x4 over 18m)    kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     16m (x4 over 18m)    kubelet            Error: ErrImagePull
  Warning  Failed     16m (x6 over 18m)    kubelet            Error: ImagePullBackOff
  Normal   BackOff    3m2s (x64 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-062731 logs kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-062731 logs kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard: exit status 1 (118.027055ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-8694d4445c-2srlk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context old-k8s-version-062731 logs kubernetes-dashboard-8694d4445c-2srlk -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-062731 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
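For context on the timeout at start_stop_delete_test.go:286 above: the dashboard pod stays Pending because its only container is stuck in ImagePullBackOff behind Docker Hub's unauthenticated pull rate limit, as the pod Events show. A hedged sketch of how one might confirm the back-off state and whether the node can pull the image at all, assuming the old-k8s-version-062731 profile is still running; these commands are not part of the harness:

	kubectl --context old-k8s-version-062731 -n kubernetes-dashboard get pod kubernetes-dashboard-8694d4445c-2srlk \
	  -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}{"\n"}'   # prints ImagePullBackOff while the back-off persists
	minikube -p old-k8s-version-062731 ssh -- docker pull kubernetesui/dashboard:v2.7.0   # succeeds only once the rate-limit window resets or the pull is authenticated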
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-062731
helpers_test.go:243: (dbg) docker inspect old-k8s-version-062731:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945",
	        "Created": "2025-09-29T14:11:34.338221643Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1550943,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T14:13:10.81954335Z",
	            "FinishedAt": "2025-09-29T14:13:09.92245346Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945/hostname",
	        "HostsPath": "/var/lib/docker/containers/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945/hosts",
	        "LogPath": "/var/lib/docker/containers/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945/5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945-json.log",
	        "Name": "/old-k8s-version-062731",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-062731:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-062731",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5f28f54ae5c50f482469e97b46287c692647518f467286c6789d45009577e945",
	                "LowerDir": "/var/lib/docker/overlay2/69dea7ead802aefaa9de4bbdf0ca143df3900bc5dc898f554b4cd111e13589aa-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/69dea7ead802aefaa9de4bbdf0ca143df3900bc5dc898f554b4cd111e13589aa/merged",
	                "UpperDir": "/var/lib/docker/overlay2/69dea7ead802aefaa9de4bbdf0ca143df3900bc5dc898f554b4cd111e13589aa/diff",
	                "WorkDir": "/var/lib/docker/overlay2/69dea7ead802aefaa9de4bbdf0ca143df3900bc5dc898f554b4cd111e13589aa/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-062731",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-062731/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-062731",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-062731",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-062731",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9fc68e444a9d0af669b47990d0163c5f87dffe2e2cbfc5be659a4669112c20ac",
	            "SandboxKey": "/var/run/docker/netns/9fc68e444a9d",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34286"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34287"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34290"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34288"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34289"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-062731": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:9e:18:d4:ee:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3df0265d4a1e81a524901d6aaa18a947950c22eeebbdc38ea9e67bd3e2f8ebbf",
	                    "EndpointID": "8845ad082418d90ac76bb0f232add59363516ccc1994a57e2918033907e4b693",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-062731",
	                        "5f28f54ae5c5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-062731 -n old-k8s-version-062731
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-062731 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p old-k8s-version-062731 logs -n 25: (1.483315564s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p kubenet-212797 sudo docker system info                                                                                                                                                                                                       │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status cri-docker --all --full --no-pager                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat cri-docker --no-pager                                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                                                                                                 │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                                                                                           │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cri-dockerd --version                                                                                                                                                                                                    │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat containerd --no-pager                                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                               │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /etc/containerd/config.toml                                                                                                                                                                                          │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo containerd config dump                                                                                                                                                                                                   │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                            │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │                     │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat crio --no-pager                                                                                                                                                                                            │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                  │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo crio config                                                                                                                                                                                                              │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ delete  │ -p kubenet-212797                                                                                                                                                                                                                               │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ start   │ -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-062731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ stop    │ -p old-k8s-version-062731 --alsologtostderr -v=3                                                                                                                                                                                                │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-062731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ start   │ -p old-k8s-version-062731 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-983174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ stop    │ -p no-preload-983174 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:14 UTC │
	│ addons  │ enable dashboard -p no-preload-983174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:14 UTC │ 29 Sep 25 14:14 UTC │
	│ start   │ -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:14 UTC │ 29 Sep 25 14:15 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 14:14:09
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 14:14:09.446915 1556666 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:14:09.447165 1556666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:14:09.447200 1556666 out.go:374] Setting ErrFile to fd 2...
	I0929 14:14:09.447220 1556666 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:14:09.447495 1556666 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:14:09.447946 1556666 out.go:368] Setting JSON to false
	I0929 14:14:09.449072 1556666 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":21402,"bootTime":1759133848,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:14:09.449209 1556666 start.go:140] virtualization:  
	I0929 14:14:09.452257 1556666 out.go:179] * [no-preload-983174] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:14:09.456099 1556666 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:14:09.456265 1556666 notify.go:220] Checking for updates...
	I0929 14:14:09.459654 1556666 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:14:09.462628 1556666 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:09.465578 1556666 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:14:09.468487 1556666 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:14:09.471340 1556666 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:14:09.474663 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:09.475308 1556666 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:14:09.502198 1556666 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:14:09.502336 1556666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:14:09.561225 1556666 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:14:09.551094641 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:14:09.561332 1556666 docker.go:318] overlay module found
	I0929 14:14:09.566299 1556666 out.go:179] * Using the docker driver based on existing profile
	I0929 14:14:09.569150 1556666 start.go:304] selected driver: docker
	I0929 14:14:09.569168 1556666 start.go:924] validating driver "docker" against &{Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:doc
ker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:09.569285 1556666 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:14:09.570017 1556666 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:14:09.624942 1556666 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:14:09.615982942 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:14:09.625279 1556666 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:14:09.625316 1556666 cni.go:84] Creating CNI manager for ""
	I0929 14:14:09.625393 1556666 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:14:09.625438 1556666 start.go:348] cluster config:
	{Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocke
t: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID
:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:09.628718 1556666 out.go:179] * Starting "no-preload-983174" primary control-plane node in "no-preload-983174" cluster
	I0929 14:14:09.631576 1556666 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:14:09.634419 1556666 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:14:09.637280 1556666 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:14:09.637361 1556666 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:14:09.637432 1556666 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/config.json ...
	I0929 14:14:09.637750 1556666 cache.go:107] acquiring lock: {Name:mkbf722085a8c6cd247df0776d9bc514bf99781b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.637851 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0929 14:14:09.637923 1556666 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5" took 177.364µs
	I0929 14:14:09.637940 1556666 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0929 14:14:09.637955 1556666 cache.go:107] acquiring lock: {Name:mk30f19321bc3b42d291063dc85a66705246f7e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638002 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 exists
	I0929 14:14:09.638013 1556666 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0" took 60.275µs
	I0929 14:14:09.638030 1556666 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-apiserver_v1.34.0 succeeded
	I0929 14:14:09.638041 1556666 cache.go:107] acquiring lock: {Name:mk2f793d2d4a07e670fda7f22f83aeba125cecc8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638080 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 exists
	I0929 14:14:09.638089 1556666 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0" took 49.937µs
	I0929 14:14:09.638096 1556666 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-controller-manager_v1.34.0 succeeded
	I0929 14:14:09.638106 1556666 cache.go:107] acquiring lock: {Name:mkc74eaa586dd62e4e7bb32f19e0778bae528158 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638136 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 exists
	I0929 14:14:09.638144 1556666 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0" took 39.968µs
	I0929 14:14:09.638151 1556666 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-scheduler_v1.34.0 succeeded
	I0929 14:14:09.638160 1556666 cache.go:107] acquiring lock: {Name:mk3285eeb8c57d45d5a563781eb999cc08d9baf7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638189 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 exists
	I0929 14:14:09.638197 1556666 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0" took 39.114µs
	I0929 14:14:09.638204 1556666 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/kube-proxy_v1.34.0 succeeded
	I0929 14:14:09.638213 1556666 cache.go:107] acquiring lock: {Name:mkbc5650bf66f5bda3f443eba33f59d2953325c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638242 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 exists
	I0929 14:14:09.638251 1556666 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1" took 39.09µs
	I0929 14:14:09.638257 1556666 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/pause_3.10.1 succeeded
	I0929 14:14:09.638266 1556666 cache.go:107] acquiring lock: {Name:mk1e873b26d63631af61d7ed1e9134ed28465b53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638295 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 exists
	I0929 14:14:09.638304 1556666 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0" took 39.049µs
	I0929 14:14:09.638310 1556666 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I0929 14:14:09.638330 1556666 cache.go:107] acquiring lock: {Name:mk303304602324c8e2b92b82ec131997d8ec523d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.638360 1556666 cache.go:115] /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I0929 14:14:09.638369 1556666 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1" took 46.787µs
	I0929 14:14:09.638375 1556666 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/images/arm64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I0929 14:14:09.638381 1556666 cache.go:87] Successfully saved all images to host disk.
	I0929 14:14:09.656713 1556666 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:14:09.656737 1556666 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:14:09.656754 1556666 cache.go:232] Successfully downloaded all kic artifacts
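The cache.go lines above show the per-image fast path: for each required image minikube takes a lock, checks whether the corresponding tarball already exists under .minikube/cache/images/<arch>/, and logs "save to tar file ... succeeded" without re-pulling when it does; the image.go:100 line does the analogous check against the local docker daemon for the kicbase image. A minimal Go sketch of that check-before-save pattern (an illustration only, not minikube's actual cache.go; the cache layout and the tag-to-filename mapping below are assumptions):

// cachecheck.go: sketch of the "tarball already cached, skip re-saving" pattern.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// cachedTarPath maps an image ref such as "registry.k8s.io/pause:3.10.1"
// to a tar path such as <cacheDir>/registry.k8s.io/pause_3.10.1 (assumed layout).
func cachedTarPath(cacheDir, image string) string {
	return filepath.Join(cacheDir, strings.ReplaceAll(image, ":", "_"))
}

// ensureCached reports whether the image tarball is already on disk, so the
// caller can skip the expensive save-to-tar step.
func ensureCached(cacheDir, image string) (bool, error) {
	_, err := os.Stat(cachedTarPath(cacheDir, image))
	if err == nil {
		return true, nil // already cached, nothing to do
	}
	if os.IsNotExist(err) {
		return false, nil // caller would save the image here
	}
	return false, err
}

func main() {
	cacheDir := os.ExpandEnv("$HOME/.minikube/cache/images/arm64")
	for _, img := range []string{
		"gcr.io/k8s-minikube/storage-provisioner:v5",
		"registry.k8s.io/pause:3.10.1",
	} {
		ok, err := ensureCached(cacheDir, img)
		fmt.Println(img, "cached:", ok, "err:", err)
	}
}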
	I0929 14:14:09.656776 1556666 start.go:360] acquireMachinesLock for no-preload-983174: {Name:mke1e7fc5da9d04523b73b29b2664621e2ac37f2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:14:09.656829 1556666 start.go:364] duration metric: took 38.516µs to acquireMachinesLock for "no-preload-983174"
	I0929 14:14:09.656855 1556666 start.go:96] Skipping create...Using existing machine configuration
	I0929 14:14:09.656864 1556666 fix.go:54] fixHost starting: 
	I0929 14:14:09.657131 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:09.674254 1556666 fix.go:112] recreateIfNeeded on no-preload-983174: state=Stopped err=<nil>
	W0929 14:14:09.674292 1556666 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 14:14:09.677584 1556666 out.go:252] * Restarting existing docker container for "no-preload-983174" ...
	I0929 14:14:09.677673 1556666 cli_runner.go:164] Run: docker start no-preload-983174
	I0929 14:14:09.938637 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:09.957785 1556666 kic.go:430] container "no-preload-983174" state is running.
	I0929 14:14:09.959526 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:09.982824 1556666 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/config.json ...
	I0929 14:14:09.983047 1556666 machine.go:93] provisionDockerMachine start ...
	I0929 14:14:09.983106 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:10.007116 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:10.007471 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:10.007482 1556666 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:14:10.008211 1556666 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54734->127.0.0.1:34291: read: connection reset by peer
	I0929 14:14:13.168318 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-983174
	
	I0929 14:14:13.168401 1556666 ubuntu.go:182] provisioning hostname "no-preload-983174"
	I0929 14:14:13.168478 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:13.191152 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:13.191553 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:13.191572 1556666 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-983174 && echo "no-preload-983174" | sudo tee /etc/hostname
	I0929 14:14:13.354864 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-983174
	
	I0929 14:14:13.354956 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:13.373283 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:13.373591 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:13.373619 1556666 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-983174' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-983174/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-983174' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:14:13.520952 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
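The SSH snippet above is the usual idempotent /etc/hosts edit: if no line already ends with the hostname, either rewrite the existing 127.0.1.1 entry or append one. A local Go sketch of the same logic (the file path is a parameter so it needs no root; an illustration, not minikube's code):

// hostsfix.go: ensure a hosts file maps 127.0.1.1 to the machine hostname.
package main

import (
	"fmt"
	"os"
	"strings"
)

func ensureHostname(path, name string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	lines := strings.Split(string(data), "\n")
	for _, l := range lines {
		f := strings.Fields(l)
		if len(f) >= 2 && f[len(f)-1] == name {
			return nil // hostname already present, like grep -xq '.*\sname'
		}
	}
	replaced := false
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing entry
			replaced = true
			break
		}
	}
	if !replaced {
		lines = append(lines, "127.0.1.1 "+name) // or append a new one
	}
	return os.WriteFile(path, []byte(strings.Join(lines, "\n")), 0644)
}

func main() {
	fmt.Println("err:", ensureHostname("hosts.test", "no-preload-983174"))
}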
	I0929 14:14:13.520977 1556666 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:14:13.520994 1556666 ubuntu.go:190] setting up certificates
	I0929 14:14:13.521004 1556666 provision.go:84] configureAuth start
	I0929 14:14:13.521063 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:13.538844 1556666 provision.go:143] copyHostCerts
	I0929 14:14:13.538915 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:14:13.538938 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:14:13.539019 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:14:13.539171 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:14:13.539183 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:14:13.539212 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:14:13.539284 1556666 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:14:13.539295 1556666 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:14:13.539321 1556666 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:14:13.539380 1556666 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.no-preload-983174 san=[127.0.0.1 192.168.76.2 localhost minikube no-preload-983174]
	I0929 14:14:14.175612 1556666 provision.go:177] copyRemoteCerts
	I0929 14:14:14.175688 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:14:14.175734 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.193690 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:14.293882 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 14:14:14.318335 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0929 14:14:14.344180 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:14:14.369452 1556666 provision.go:87] duration metric: took 848.423896ms to configureAuth
	I0929 14:14:14.369478 1556666 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:14:14.369677 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:14.369735 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.387401 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.387709 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.387723 1556666 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:14:14.529052 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:14:14.529074 1556666 ubuntu.go:71] root file system type: overlay
	I0929 14:14:14.529186 1556666 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:14:14.529255 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.547682 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.547997 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.548083 1556666 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:14:14.705061 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:14:14.705158 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.723963 1556666 main.go:141] libmachine: Using SSH client type: native
	I0929 14:14:14.724277 1556666 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34291 <nil> <nil>}
	I0929 14:14:14.724302 1556666 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:14:14.871746 1556666 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:14:14.871808 1556666 machine.go:96] duration metric: took 4.888752094s to provisionDockerMachine
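Provisioning writes the generated docker.service content to docker.service.new, diffs it against the installed unit, and only moves it into place and restarts docker when the two differ, so an unchanged unit costs nothing. A minimal Go sketch of that write-compare-replace pattern (local paths, no root; the systemctl step is only indicated in a comment; not minikube's implementation):

// unitupdate.go: install a unit file only when its content changed.
package main

import (
	"bytes"
	"fmt"
	"os"
)

// updateUnit writes newContent to unitPath only when it differs from what is
// already installed, and reports whether a restart would be needed.
func updateUnit(unitPath string, newContent []byte) (bool, error) {
	old, err := os.ReadFile(unitPath)
	if err == nil && bytes.Equal(old, newContent) {
		return false, nil // identical: skip the replace and the restart
	}
	tmp := unitPath + ".new"
	if err := os.WriteFile(tmp, newContent, 0644); err != nil {
		return false, err
	}
	// Rename is atomic on the same filesystem; this mirrors the log's
	// `mv docker.service.new docker.service`.
	return true, os.Rename(tmp, unitPath)
}

func main() {
	unit := []byte("[Unit]\nDescription=example\n")
	changed, err := updateUnit("docker.service.test", unit)
	fmt.Println("changed:", changed, "err:", err)
	// On a real host a change is followed by the command shown in the log:
	//   systemctl daemon-reload && systemctl enable docker && systemctl restart docker
}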
	I0929 14:14:14.871835 1556666 start.go:293] postStartSetup for "no-preload-983174" (driver="docker")
	I0929 14:14:14.871865 1556666 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:14:14.871951 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:14:14.872027 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:14.889467 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:14.990105 1556666 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:14:14.993594 1556666 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:14:14.993625 1556666 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:14:14.993636 1556666 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:14:14.993642 1556666 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:14:14.993655 1556666 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:14:14.993707 1556666 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:14:14.993801 1556666 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:14:14.993924 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:14:15.010275 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:14:15.041050 1556666 start.go:296] duration metric: took 169.180506ms for postStartSetup
	I0929 14:14:15.041206 1556666 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:14:15.041284 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.059737 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.157816 1556666 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:14:15.162824 1556666 fix.go:56] duration metric: took 5.505952464s for fixHost
	I0929 14:14:15.162849 1556666 start.go:83] releasing machines lock for "no-preload-983174", held for 5.506005527s
	I0929 14:14:15.162917 1556666 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-983174
	I0929 14:14:15.180675 1556666 ssh_runner.go:195] Run: cat /version.json
	I0929 14:14:15.180722 1556666 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:14:15.180777 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.180726 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:15.198974 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.200600 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:15.292199 1556666 ssh_runner.go:195] Run: systemctl --version
	I0929 14:14:15.427571 1556666 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:14:15.431914 1556666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:14:15.452046 1556666 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:14:15.452120 1556666 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:14:15.461413 1556666 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 14:14:15.461440 1556666 start.go:495] detecting cgroup driver to use...
	I0929 14:14:15.461473 1556666 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:14:15.461565 1556666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:14:15.477405 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:14:15.489101 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:14:15.499317 1556666 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:14:15.499406 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:14:15.512856 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:14:15.522949 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:14:15.533163 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:14:15.543072 1556666 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:14:15.552630 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:14:15.563081 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:14:15.573609 1556666 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:14:15.583981 1556666 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:14:15.593828 1556666 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 14:14:15.602598 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:15.696246 1556666 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 14:14:15.784831 1556666 start.go:495] detecting cgroup driver to use...
	I0929 14:14:15.784911 1556666 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:14:15.784990 1556666 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:14:15.799531 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:14:15.815605 1556666 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:14:15.840157 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:14:15.852831 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:14:15.865897 1556666 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:14:15.883856 1556666 ssh_runner.go:195] Run: which cri-dockerd
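Switching crictl from containerd to cri-dockerd is just a rewrite of /etc/crictl.yaml with a different runtime-endpoint, as the two `tee /etc/crictl.yaml` commands above show. A tiny Go sketch of that write (path and endpoint are parameters; an illustration, not minikube's code):

// crictlcfg.go: write a minimal crictl config pointing at a CRI socket.
package main

import (
	"fmt"
	"os"
)

func writeCrictlConfig(path, endpoint string) error {
	// crictl reads a small YAML file; only the endpoint key is set here.
	return os.WriteFile(path, []byte("runtime-endpoint: "+endpoint+"\n"), 0644)
}

func main() {
	err := writeCrictlConfig("crictl.test.yaml", "unix:///var/run/cri-dockerd.sock")
	fmt.Println("err:", err)
}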
	I0929 14:14:15.887405 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:14:15.896336 1556666 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:14:15.915875 1556666 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:14:16.027307 1556666 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:14:16.115830 1556666 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:14:16.116008 1556666 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 14:14:16.139611 1556666 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:14:16.151714 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:16.249049 1556666 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 14:14:16.778694 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
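docker.go:575 reconfigures the Docker daemon's cgroup driver by shipping a small /etc/docker/daemon.json (130 bytes here) and restarting the service. The exact JSON minikube writes is not shown in the log, so the sketch below only assumes it carries the standard native.cgroupdriver exec-opt; treat it as an illustration:

// daemoncfg.go: render a daemon.json fragment that pins the cgroup driver.
package main

import (
	"encoding/json"
	"fmt"
)

type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func render(cgroupDriver string) ([]byte, error) {
	// native.cgroupdriver is a standard dockerd exec-opt; the real file
	// minikube ships may carry additional fields.
	cfg := daemonConfig{ExecOpts: []string{"native.cgroupdriver=" + cgroupDriver}}
	return json.MarshalIndent(cfg, "", "  ")
}

func main() {
	b, err := render("cgroupfs")
	fmt.Println(string(b), err)
}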
	I0929 14:14:16.790316 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:14:16.802021 1556666 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 14:14:16.815179 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:14:16.827094 1556666 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:14:16.928082 1556666 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:14:17.034122 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.145418 1556666 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:14:17.161368 1556666 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:14:17.174566 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.275531 1556666 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:14:17.385986 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:14:17.400398 1556666 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:14:17.400473 1556666 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:14:17.404874 1556666 start.go:563] Will wait 60s for crictl version
	I0929 14:14:17.404984 1556666 ssh_runner.go:195] Run: which crictl
	I0929 14:14:17.408474 1556666 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:14:17.529650 1556666 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 14:14:17.529725 1556666 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:14:17.554294 1556666 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:14:17.585653 1556666 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:14:17.585796 1556666 cli_runner.go:164] Run: docker network inspect no-preload-983174 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:14:17.607543 1556666 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 14:14:17.611447 1556666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:14:17.622262 1556666 kubeadm.go:875] updating cluster {Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:14:17.622371 1556666 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:14:17.622426 1556666 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:14:17.641266 1556666 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:14:17.641291 1556666 cache_images.go:85] Images are preloaded, skipping loading
	I0929 14:14:17.641301 1556666 kubeadm.go:926] updating node { 192.168.76.2 8443 v1.34.0 docker true true} ...
	I0929 14:14:17.641412 1556666 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-983174 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 14:14:17.641479 1556666 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:14:17.706589 1556666 cni.go:84] Creating CNI manager for ""
	I0929 14:14:17.706614 1556666 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:14:17.706628 1556666 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:14:17.706649 1556666 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-983174 NodeName:no-preload-983174 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:14:17.706779 1556666 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-983174"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 14:14:17.706850 1556666 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:14:17.715757 1556666 binaries.go:44] Found k8s binaries, skipping transfer
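The kubeadm config printed above (kubeadm.go:195) is rendered from the option set at kubeadm.go:189 and then shipped to /var/tmp/minikube/kubeadm.yaml.new. A minimal Go sketch of templating just the node-specific InitConfiguration fragment (the template text is a trimmed illustration, not minikube's actual template):

// kubeadmtmpl.go: render a node-specific InitConfiguration fragment.
package main

import (
	"os"
	"text/template"
)

const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

type params struct {
	AdvertiseAddress, CRISocket, NodeName, NodeIP string
	APIServerPort                                 int
}

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the log above; any other cluster would substitute its own.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "192.168.76.2",
		APIServerPort:    8443,
		CRISocket:        "unix:///var/run/cri-dockerd.sock",
		NodeName:         "no-preload-983174",
		NodeIP:           "192.168.76.2",
	})
}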
	I0929 14:14:17.715829 1556666 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:14:17.724341 1556666 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0929 14:14:17.742721 1556666 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:14:17.761018 1556666 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2218 bytes)
	I0929 14:14:17.780275 1556666 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:14:17.783823 1556666 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:14:17.794621 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:17.897706 1556666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:14:17.912470 1556666 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174 for IP: 192.168.76.2
	I0929 14:14:17.912492 1556666 certs.go:194] generating shared ca certs ...
	I0929 14:14:17.912534 1556666 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:17.912697 1556666 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:14:17.912749 1556666 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:14:17.912761 1556666 certs.go:256] generating profile certs ...
	I0929 14:14:17.912856 1556666 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.key
	I0929 14:14:17.912930 1556666 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.key.8135a500
	I0929 14:14:17.912982 1556666 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.key
	I0929 14:14:17.913106 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:14:17.913160 1556666 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:14:17.913173 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:14:17.913206 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:14:17.913232 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:14:17.913261 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:14:17.913318 1556666 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:14:17.913997 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:14:17.956896 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:14:17.985873 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:14:18.028989 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:14:18.063448 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0929 14:14:18.096280 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 14:14:18.147356 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:14:18.179221 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 14:14:18.209546 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:14:18.242132 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:14:18.273433 1556666 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:14:18.303036 1556666 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:14:18.322286 1556666 ssh_runner.go:195] Run: openssl version
	I0929 14:14:18.327639 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:14:18.342520 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.346354 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.346432 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:14:18.353769 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:14:18.362808 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:14:18.372034 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.375576 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.375643 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:14:18.382977 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:14:18.392026 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:14:18.402458 1556666 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.405833 1556666 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.405908 1556666 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:14:18.412741 1556666 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 14:14:18.421756 1556666 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:14:18.425436 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 14:14:18.432235 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 14:14:18.439307 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 14:14:18.446668 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 14:14:18.453723 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 14:14:18.460904 1556666 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
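Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; a failing check would trigger regeneration. The same test in Go with crypto/x509 (the file path is a placeholder; a sketch, not minikube's code):

// certcheck.go: fail if a PEM certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func checkEnd(certPath string, within time.Duration) error {
	data, err := os.ReadFile(certPath)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	// Equivalent to `openssl x509 -checkend <seconds>`: still valid at now+within?
	if time.Now().Add(within).After(cert.NotAfter) {
		return fmt.Errorf("certificate expires within %s (NotAfter=%s)", within, cert.NotAfter)
	}
	return nil
}

func main() {
	fmt.Println("checkend:", checkEnd("apiserver.crt", 24*time.Hour))
}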
	I0929 14:14:18.467893 1556666 kubeadm.go:392] StartCluster: {Name:no-preload-983174 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:no-preload-983174 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:14:18.468068 1556666 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:14:18.485585 1556666 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:14:18.497293 1556666 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 14:14:18.497323 1556666 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 14:14:18.497382 1556666 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 14:14:18.506278 1556666 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 14:14:18.506918 1556666 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-983174" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:18.507237 1556666 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1125775/kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-983174" cluster setting kubeconfig missing "no-preload-983174" context setting]
	I0929 14:14:18.507707 1556666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.509252 1556666 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 14:14:18.517799 1556666 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 14:14:18.517889 1556666 kubeadm.go:593] duration metric: took 20.559326ms to restartPrimaryControlPlane
	I0929 14:14:18.517914 1556666 kubeadm.go:394] duration metric: took 50.028401ms to StartCluster
	I0929 14:14:18.517962 1556666 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.518060 1556666 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:14:18.519066 1556666 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:14:18.519359 1556666 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:14:18.519673 1556666 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:14:18.519746 1556666 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:14:18.519847 1556666 addons.go:69] Setting storage-provisioner=true in profile "no-preload-983174"
	I0929 14:14:18.519867 1556666 addons.go:238] Setting addon storage-provisioner=true in "no-preload-983174"
	W0929 14:14:18.519877 1556666 addons.go:247] addon storage-provisioner should already be in state true
	I0929 14:14:18.519854 1556666 addons.go:69] Setting default-storageclass=true in profile "no-preload-983174"
	I0929 14:14:18.519904 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.519920 1556666 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-983174"
	I0929 14:14:18.520315 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.520394 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.520954 1556666 addons.go:69] Setting metrics-server=true in profile "no-preload-983174"
	I0929 14:14:18.520978 1556666 addons.go:238] Setting addon metrics-server=true in "no-preload-983174"
	W0929 14:14:18.520986 1556666 addons.go:247] addon metrics-server should already be in state true
	I0929 14:14:18.521025 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.521459 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.524831 1556666 addons.go:69] Setting dashboard=true in profile "no-preload-983174"
	I0929 14:14:18.524862 1556666 addons.go:238] Setting addon dashboard=true in "no-preload-983174"
	W0929 14:14:18.524872 1556666 addons.go:247] addon dashboard should already be in state true
	I0929 14:14:18.524910 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.525477 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.526051 1556666 out.go:179] * Verifying Kubernetes components...
	I0929 14:14:18.530866 1556666 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:14:18.567077 1556666 addons.go:238] Setting addon default-storageclass=true in "no-preload-983174"
	W0929 14:14:18.567104 1556666 addons.go:247] addon default-storageclass should already be in state true
	I0929 14:14:18.567131 1556666 host.go:66] Checking if "no-preload-983174" exists ...
	I0929 14:14:18.567570 1556666 cli_runner.go:164] Run: docker container inspect no-preload-983174 --format={{.State.Status}}
	I0929 14:14:18.581520 1556666 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:14:18.584559 1556666 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:14:18.584588 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:14:18.584654 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.593397 1556666 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 14:14:18.593473 1556666 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 14:14:18.597233 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 14:14:18.597259 1556666 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 14:14:18.597325 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.603277 1556666 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 14:14:18.607154 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 14:14:18.607180 1556666 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 14:14:18.607257 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.629305 1556666 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:18.629327 1556666 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:14:18.629390 1556666 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-983174
	I0929 14:14:18.668010 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.668341 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.688724 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.701428 1556666 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34291 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/no-preload-983174/id_rsa Username:docker}
	I0929 14:14:18.731581 1556666 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:14:18.818506 1556666 node_ready.go:35] waiting up to 6m0s for node "no-preload-983174" to be "Ready" ...
	I0929 14:14:18.852403 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 14:14:18.852425 1556666 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 14:14:18.898176 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 14:14:18.898249 1556666 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0929 14:14:18.910910 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 14:14:18.910979 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 14:14:18.947482 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:14:18.978364 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 14:14:18.978391 1556666 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 14:14:19.033790 1556666 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:14:19.033863 1556666 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 14:14:19.075517 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 14:14:19.075595 1556666 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 14:14:19.079301 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:19.154767 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:14:19.219263 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 14:14:19.219348 1556666 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 14:14:19.420384 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 14:14:19.420459 1556666 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0929 14:14:19.739905 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 14:14:19.739987 1556666 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	W0929 14:14:19.746335 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.746436 1556666 retry.go:31] will retry after 131.359244ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 14:14:19.768963 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.769045 1556666 retry.go:31] will retry after 340.512991ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.792479 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 14:14:19.792677 1556666 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0929 14:14:19.878811 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W0929 14:14:19.892912 1556666 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.892989 1556666 retry.go:31] will retry after 313.861329ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:14:19.937588 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 14:14:19.937617 1556666 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 14:14:19.997110 1556666 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:14:19.997138 1556666 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 14:14:20.026643 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:14:20.110232 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:14:20.207752 1556666 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
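The validation failures above are expected while the apiserver is still coming up: kubectl cannot reach https://localhost:8443 to download the OpenAPI schema, so each apply is retried after a short, growing backoff and finally re-run with --force, as the last three ssh_runner calls show. Below is a minimal Go sketch of that retry-then-force pattern, assuming a hypothetical runKubectl helper that shells out to the bundled kubectl binary from the log (the real code also runs it through sudo over SSH):

    package sketch

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // runKubectl is a hypothetical helper mirroring the ssh_runner calls above:
    // it invokes the cluster's bundled kubectl with KUBECONFIG set.
    func runKubectl(extraArgs ...string) error {
        args := append([]string{"apply"}, extraArgs...)
        cmd := exec.Command("/var/lib/minikube/binaries/v1.34.0/kubectl", args...)
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        if out, err := cmd.CombinedOutput(); err != nil {
            return fmt.Errorf("kubectl apply failed: %v: %s", err, out)
        }
        return nil
    }

    // applyWithRetry retries a validating apply with a growing backoff and, once
    // the attempts are exhausted, falls back to `kubectl apply --force -f`,
    // roughly what the log shows for the storage-provisioner, storageclass and
    // metrics-server manifests.
    func applyWithRetry(manifest string, attempts int) error {
        backoff := 150 * time.Millisecond
        for i := 0; i < attempts; i++ {
            if err := runKubectl("-f", manifest); err == nil {
                return nil
            }
            time.Sleep(backoff)
            backoff *= 2
        }
        return runKubectl("--force", "-f", manifest)
    }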
	I0929 14:14:24.291836 1556666 node_ready.go:49] node "no-preload-983174" is "Ready"
	I0929 14:14:24.291865 1556666 node_ready.go:38] duration metric: took 5.473270305s for node "no-preload-983174" to be "Ready" ...
	I0929 14:14:24.291882 1556666 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:14:24.291942 1556666 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:14:26.299144 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.420249689s)
	I0929 14:14:26.299258 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.272581766s)
	I0929 14:14:26.299391 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (6.189124116s)
	I0929 14:14:26.302416 1556666 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p no-preload-983174 addons enable metrics-server
	
	I0929 14:14:26.411012 1556666 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.203209897s)
	I0929 14:14:26.411051 1556666 addons.go:479] Verifying addon metrics-server=true in "no-preload-983174"
	I0929 14:14:26.411223 1556666 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.119270583s)
	I0929 14:14:26.411237 1556666 api_server.go:72] duration metric: took 7.891819741s to wait for apiserver process to appear ...
	I0929 14:14:26.411242 1556666 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:14:26.411258 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:26.415317 1556666 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass, metrics-server
	I0929 14:14:26.418321 1556666 addons.go:514] duration metric: took 7.898562435s for enable addons: enabled=[storage-provisioner dashboard default-storageclass metrics-server]
	I0929 14:14:26.422832 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:14:26.422855 1556666 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:14:26.911383 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:26.926872 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:14:26.926902 1556666 api_server.go:103] status: https://192.168.76.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:14:27.412099 1556666 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I0929 14:14:27.421007 1556666 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I0929 14:14:27.422346 1556666 api_server.go:141] control plane version: v1.34.0
	I0929 14:14:27.422373 1556666 api_server.go:131] duration metric: took 1.011125009s to wait for apiserver health ...
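The 500 responses above are the apiserver's /healthz output while the apiservice-discovery-controller post-start hook is still failing; the check simply polls roughly every 500ms until it gets a 200, which happens about a second later. A bare-bones polling sketch in Go, assuming the same https://192.168.76.2:8443/healthz URL; certificate verification is skipped here to keep it short, whereas the real check trusts the cluster CA:

    package sketch

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // waitForHealthz polls the apiserver /healthz endpoint until it returns
    // 200 OK or the deadline passes, mirroring the api_server.go loop above.
    func waitForHealthz(url string, timeout time.Duration) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Skipping verification keeps the sketch short; the real check
            // trusts the cluster CA instead.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
                fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("apiserver %s never became healthy within %s", url, timeout)
    }

For example, waitForHealthz("https://192.168.76.2:8443/healthz", time.Minute) would cover the roughly one-second wait seen in this run.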
	I0929 14:14:27.422383 1556666 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:14:27.428732 1556666 system_pods.go:59] 8 kube-system pods found
	I0929 14:14:27.428777 1556666 system_pods.go:61] "coredns-66bc5c9577-846n7" [dd192e93-efcd-416c-b3f2-c56860e96667] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:14:27.428786 1556666 system_pods.go:61] "etcd-no-preload-983174" [5aa66d56-4e0b-426f-af8c-880f7e3c02db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:14:27.428794 1556666 system_pods.go:61] "kube-apiserver-no-preload-983174" [e9e9910a-f91a-40e2-8152-50c95dc16563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:14:27.428801 1556666 system_pods.go:61] "kube-controller-manager-no-preload-983174" [4cdb0775-7e84-4c1c-90b6-a8d68514159c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:14:27.428829 1556666 system_pods.go:61] "kube-proxy-rjpsv" [640460b1-abcd-4490-a152-ceb13067ffb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:14:27.428851 1556666 system_pods.go:61] "kube-scheduler-no-preload-983174" [5fb52905-6a97-4feb-bc63-6a67be970b9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:14:27.428865 1556666 system_pods.go:61] "metrics-server-746fcd58dc-6pt8w" [db3c374a-7d3e-4ebd-9a71-c1245d62d2ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:14:27.428873 1556666 system_pods.go:61] "storage-provisioner" [3e67c2e9-9826-4557-b747-fec5992144f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:14:27.428883 1556666 system_pods.go:74] duration metric: took 6.494789ms to wait for pod list to return data ...
	I0929 14:14:27.428904 1556666 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:14:27.431458 1556666 default_sa.go:45] found service account: "default"
	I0929 14:14:27.431530 1556666 default_sa.go:55] duration metric: took 2.610441ms for default service account to be created ...
	I0929 14:14:27.431555 1556666 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:14:27.527907 1556666 system_pods.go:86] 8 kube-system pods found
	I0929 14:14:27.527993 1556666 system_pods.go:89] "coredns-66bc5c9577-846n7" [dd192e93-efcd-416c-b3f2-c56860e96667] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:14:27.528017 1556666 system_pods.go:89] "etcd-no-preload-983174" [5aa66d56-4e0b-426f-af8c-880f7e3c02db] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:14:27.528052 1556666 system_pods.go:89] "kube-apiserver-no-preload-983174" [e9e9910a-f91a-40e2-8152-50c95dc16563] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:14:27.528078 1556666 system_pods.go:89] "kube-controller-manager-no-preload-983174" [4cdb0775-7e84-4c1c-90b6-a8d68514159c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:14:27.528099 1556666 system_pods.go:89] "kube-proxy-rjpsv" [640460b1-abcd-4490-a152-ceb13067ffb1] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:14:27.528119 1556666 system_pods.go:89] "kube-scheduler-no-preload-983174" [5fb52905-6a97-4feb-bc63-6a67be970b9b] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:14:27.528137 1556666 system_pods.go:89] "metrics-server-746fcd58dc-6pt8w" [db3c374a-7d3e-4ebd-9a71-c1245d62d2ec] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:14:27.528165 1556666 system_pods.go:89] "storage-provisioner" [3e67c2e9-9826-4557-b747-fec5992144f5] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:14:27.528189 1556666 system_pods.go:126] duration metric: took 96.616381ms to wait for k8s-apps to be running ...
	I0929 14:14:27.528211 1556666 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:14:27.528293 1556666 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:14:27.542062 1556666 system_svc.go:56] duration metric: took 13.832937ms WaitForService to wait for kubelet
	I0929 14:14:27.542130 1556666 kubeadm.go:578] duration metric: took 9.022710418s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:14:27.542161 1556666 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:14:27.544948 1556666 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:14:27.545031 1556666 node_conditions.go:123] node cpu capacity is 2
	I0929 14:14:27.545058 1556666 node_conditions.go:105] duration metric: took 2.879218ms to run NodePressure ...
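The NodePressure step reads the node's reported capacity (2 CPUs and 203034800Ki of ephemeral storage here) and confirms that no pressure condition is set. A hedged client-go sketch of that kind of check; checkNodePressure and the assumption of an already-configured *kubernetes.Clientset are illustrative, not minikube's actual helper:

    package sketch

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // checkNodePressure reads a node's capacity and fails if any pressure
    // condition (memory, disk, PID) is reported as True.
    func checkNodePressure(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        fmt.Printf("cpu=%s ephemeral-storage=%s\n",
            node.Status.Capacity.Cpu(), node.Status.Capacity.StorageEphemeral())
        for _, cond := range node.Status.Conditions {
            switch cond.Type {
            case corev1.NodeMemoryPressure, corev1.NodeDiskPressure, corev1.NodePIDPressure:
                if cond.Status == corev1.ConditionTrue {
                    return fmt.Errorf("node %s reports %s", name, cond.Type)
                }
            }
        }
        return nil
    }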
	I0929 14:14:27.545097 1556666 start.go:241] waiting for startup goroutines ...
	I0929 14:14:27.545120 1556666 start.go:246] waiting for cluster config update ...
	I0929 14:14:27.545144 1556666 start.go:255] writing updated cluster config ...
	I0929 14:14:27.545456 1556666 ssh_runner.go:195] Run: rm -f paused
	I0929 14:14:27.554430 1556666 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:14:27.563260 1556666 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-846n7" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:14:29.608788 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:32.070297 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:34.569602 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:37.069056 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:39.574455 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:42.070030 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:44.070122 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:46.570382 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:49.068692 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:51.068939 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:53.569240 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:56.069416 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	W0929 14:14:58.569070 1556666 pod_ready.go:104] pod "coredns-66bc5c9577-846n7" is not "Ready", error: <nil>
	I0929 14:15:00.160680 1556666 pod_ready.go:94] pod "coredns-66bc5c9577-846n7" is "Ready"
	I0929 14:15:00.160769 1556666 pod_ready.go:86] duration metric: took 32.597436105s for pod "coredns-66bc5c9577-846n7" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.164644 1556666 pod_ready.go:83] waiting for pod "etcd-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.229517 1556666 pod_ready.go:94] pod "etcd-no-preload-983174" is "Ready"
	I0929 14:15:00.229599 1556666 pod_ready.go:86] duration metric: took 64.919216ms for pod "etcd-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.283567 1556666 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.359530 1556666 pod_ready.go:94] pod "kube-apiserver-no-preload-983174" is "Ready"
	I0929 14:15:00.359628 1556666 pod_ready.go:86] duration metric: took 75.979002ms for pod "kube-apiserver-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.372119 1556666 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.383080 1556666 pod_ready.go:94] pod "kube-controller-manager-no-preload-983174" is "Ready"
	I0929 14:15:00.383176 1556666 pod_ready.go:86] duration metric: took 10.963097ms for pod "kube-controller-manager-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.486531 1556666 pod_ready.go:83] waiting for pod "kube-proxy-rjpsv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:00.885176 1556666 pod_ready.go:94] pod "kube-proxy-rjpsv" is "Ready"
	I0929 14:15:00.885204 1556666 pod_ready.go:86] duration metric: took 398.643571ms for pod "kube-proxy-rjpsv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.085775 1556666 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.485654 1556666 pod_ready.go:94] pod "kube-scheduler-no-preload-983174" is "Ready"
	I0929 14:15:01.485682 1556666 pod_ready.go:86] duration metric: took 399.876397ms for pod "kube-scheduler-no-preload-983174" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:15:01.485696 1556666 pod_ready.go:40] duration metric: took 33.931188768s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
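The pod_ready waits above poll each labelled kube-system pod until it reports the Ready condition or disappears (coredns took about 32.6s here). A minimal client-go sketch of one such wait, again assuming an existing clientset; the label-selector lookup and most of the real helper's bookkeeping are omitted:

    package sketch

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // waitPodReady polls a kube-system pod until its Ready condition is True,
    // also returning nil if the pod has been deleted ("Ready or be gone").
    func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, name, metav1.GetOptions{})
            if errors.IsNotFound(err) {
                return nil // pod is gone, which the waiter also accepts
            }
            if err == nil {
                for _, cond := range pod.Status.Conditions {
                    if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("pod %q never became Ready within %s", name, timeout)
    }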
	I0929 14:15:01.548843 1556666 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 14:15:01.552089 1556666 out.go:179] * Done! kubectl is now configured to use "no-preload-983174" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 14:19:09 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:09.011177381Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:19:31 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:31.244233229Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:19:31 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:31.440216334Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:19:31 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:19:31.440603727Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:19:31 old-k8s-version-062731 cri-dockerd[1211]: time="2025-09-29T14:19:31Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:24:17 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:17.015501596Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:24:17 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:17.015543361Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:24:17 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:17.018441207Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:24:17 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:17.018486598Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:24:19 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:19.041024628Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:24:19 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:19.129231686Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:24:33 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:33.243043863Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:24:33 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:33.434626677Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:24:33 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:24:33.434758330Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:24:33 old-k8s-version-062731 cri-dockerd[1211]: time="2025-09-29T14:24:33Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:29:19 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:19.011015251Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:29:19 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:19.011090165Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:29:19 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:19.014208781Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:29:19 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:19.014255977Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:29:20 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:20.045507330Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:29:20 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:20.134872040Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:29:42 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:42.260052994Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:29:42 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:42.478778851Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:29:42 old-k8s-version-062731 dockerd[895]: time="2025-09-29T14:29:42.479043600Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:29:42 old-k8s-version-062731 cri-dockerd[1211]: time="2025-09-29T14:29:42Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	67ee15eacad42       ba04bb24b9575                                                                                         17 minutes ago      Running             storage-provisioner       2                   e7b278f3fdb3f       storage-provisioner
	be636cd58da4e       1611cd07b61d5                                                                                         18 minutes ago      Running             busybox                   1                   4d45421593112       busybox
	7c2d1182a5bf4       97e04611ad434                                                                                         18 minutes ago      Running             coredns                   1                   ea4213f958bd9       coredns-5dd5756b68-pld27
	4de42700ff466       ba04bb24b9575                                                                                         18 minutes ago      Exited              storage-provisioner       1                   e7b278f3fdb3f       storage-provisioner
	43868fe5fc274       940f54a5bcae9                                                                                         18 minutes ago      Running             kube-proxy                1                   cc27fe045f039       kube-proxy-lb4zs
	78bb7c9cf3983       46cc66ccc7c19                                                                                         18 minutes ago      Running             kube-controller-manager   1                   20ad2afb69ba0       kube-controller-manager-old-k8s-version-062731
	5a2886e8d0f34       9cdd6470f48c8                                                                                         18 minutes ago      Running             etcd                      1                   9b30aedda13a5       etcd-old-k8s-version-062731
	a92699fef46e7       762dce4090c5f                                                                                         18 minutes ago      Running             kube-scheduler            1                   fe636016daf88       kube-scheduler-old-k8s-version-062731
	4c82d04b6c3a7       00543d2fe5d71                                                                                         18 minutes ago      Running             kube-apiserver            1                   2eb7cfdc2d5f6       kube-apiserver-old-k8s-version-062731
	bb666f6a8daba       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Exited              busybox                   0                   dc48916618137       busybox
	0bebd4d0f0d74       940f54a5bcae9                                                                                         19 minutes ago      Exited              kube-proxy                0                   4572067f55e02       kube-proxy-lb4zs
	7dc4aaf0f43a8       97e04611ad434                                                                                         19 minutes ago      Exited              coredns                   0                   c3aba0235cdac       coredns-5dd5756b68-pld27
	a0ace307b5dab       9cdd6470f48c8                                                                                         19 minutes ago      Exited              etcd                      0                   f8063aea5ea3f       etcd-old-k8s-version-062731
	1eb33dcdfff48       46cc66ccc7c19                                                                                         19 minutes ago      Exited              kube-controller-manager   0                   e9f30fee80eeb       kube-controller-manager-old-k8s-version-062731
	d19b472de2d44       762dce4090c5f                                                                                         19 minutes ago      Exited              kube-scheduler            0                   8bf13620d0efe       kube-scheduler-old-k8s-version-062731
	9103484f3ae11       00543d2fe5d71                                                                                         19 minutes ago      Exited              kube-apiserver            0                   5f278e55346c6       kube-apiserver-old-k8s-version-062731
	
	
	==> coredns [7c2d1182a5bf] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44160 - 50023 "HINFO IN 4089803307241079152.5277922079326627374. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025726781s
	
	
	==> coredns [7dc4aaf0f43a] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/arm64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 8aa94104b4dae56b00431f7362ac05b997af2246775de35dc2eb361b0707b2fa7199f9ddfdba27fdef1331b76d09c41700f6cb5d00836dabab7c0df8e651283f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:57448 - 38097 "HINFO IN 1055238920401735314.5747428167574435741. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.022753011s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-062731
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=old-k8s-version-062731
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=old-k8s-version-062731
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T14_12_01_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 14:11:58 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-062731
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 14:31:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 14:28:43 +0000   Mon, 29 Sep 2025 14:11:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 14:28:43 +0000   Mon, 29 Sep 2025 14:11:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 14:28:43 +0000   Mon, 29 Sep 2025 14:11:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 14:28:43 +0000   Mon, 29 Sep 2025 14:12:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    old-k8s-version-062731
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 00b3587badb34185a9c0f5e1a840ae3c
	  System UUID:                fb2a2127-d734-4ef5-84b1-07fd32e62650
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-5dd5756b68-pld27                          100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-old-k8s-version-062731                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kube-apiserver-old-k8s-version-062731             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-old-k8s-version-062731    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-lb4zs                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-old-k8s-version-062731             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-57f55c9bc5-fs4wn                   100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         18m
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-jmjhf        0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-2srlk             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node old-k8s-version-062731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     19m                kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientPID
	  Normal  Starting                 19m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    19m                kubelet          Node old-k8s-version-062731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  19m                kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientMemory
	  Normal  NodeNotReady             19m                kubelet          Node old-k8s-version-062731 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                19m                kubelet          Node old-k8s-version-062731 status is now: NodeReady
	  Normal  RegisteredNode           19m                node-controller  Node old-k8s-version-062731 event: Registered Node old-k8s-version-062731 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-062731 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node old-k8s-version-062731 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-062731 event: Registered Node old-k8s-version-062731 in Controller
	
	
	==> dmesg <==
	[Sep29 13:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [5a2886e8d0f3] <==
	{"level":"info","ts":"2025-09-29T14:13:20.98519Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-09-29T14:13:20.98545Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"9f0758e1c58a86ed","initial-advertise-peer-urls":["https://192.168.85.2:2380"],"listen-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.85.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-09-29T14:13:20.985474Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-09-29T14:13:20.985519Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:13:20.985526Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:13:22.702576Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed is starting a new election at term 2"}
	{"level":"info","ts":"2025-09-29T14:13:22.702838Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became pre-candidate at term 2"}
	{"level":"info","ts":"2025-09-29T14:13:22.702984Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgPreVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T14:13:22.703076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 3"}
	{"level":"info","ts":"2025-09-29T14:13:22.703184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-29T14:13:22.703265Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 3"}
	{"level":"info","ts":"2025-09-29T14:13:22.703361Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 3"}
	{"level":"info","ts":"2025-09-29T14:13:22.708492Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-062731 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T14:13:22.708731Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T14:13:22.710019Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-29T14:13:22.708753Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T14:13:22.711077Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T14:13:22.740542Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T14:13:22.740586Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T14:23:22.751275Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":968}
	{"level":"info","ts":"2025-09-29T14:23:22.80675Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":968,"took":"55.220231ms","hash":1777596759}
	{"level":"info","ts":"2025-09-29T14:23:22.806804Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1777596759,"revision":968,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T14:28:22.757176Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1217}
	{"level":"info","ts":"2025-09-29T14:28:22.758273Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1217,"took":"710.901µs","hash":4079979184}
	{"level":"info","ts":"2025-09-29T14:28:22.758317Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4079979184,"revision":1217,"compact-revision":968}
	
	
	==> etcd [a0ace307b5da] <==
	{"level":"info","ts":"2025-09-29T14:11:53.527847Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became candidate at term 2"}
	{"level":"info","ts":"2025-09-29T14:11:53.527939Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed received MsgVoteResp from 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T14:11:53.528068Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9f0758e1c58a86ed became leader at term 2"}
	{"level":"info","ts":"2025-09-29T14:11:53.528157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9f0758e1c58a86ed elected leader 9f0758e1c58a86ed at term 2"}
	{"level":"info","ts":"2025-09-29T14:11:53.529572Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"9f0758e1c58a86ed","local-member-attributes":"{Name:old-k8s-version-062731 ClientURLs:[https://192.168.85.2:2379]}","request-path":"/0/members/9f0758e1c58a86ed/attributes","cluster-id":"68eaea490fab4e05","publish-timeout":"7s"}
	{"level":"info","ts":"2025-09-29T14:11:53.529755Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T14:11:53.532589Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-09-29T14:11:53.533852Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.85.2:2379"}
	{"level":"info","ts":"2025-09-29T14:11:53.534112Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:11:53.532739Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-09-29T14:11:53.539614Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68eaea490fab4e05","local-member-id":"9f0758e1c58a86ed","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:11:53.573537Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:11:53.54019Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-09-29T14:11:53.606342Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-09-29T14:11:53.595857Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-09-29T14:12:59.336607Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T14:12:59.336684Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"old-k8s-version-062731","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"warn","ts":"2025-09-29T14:12:59.336775Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:12:59.336846Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:12:59.423616Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:12:59.423731Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-29T14:12:59.423767Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-09-29T14:12:59.426078Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:12:59.426157Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:12:59.426166Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"old-k8s-version-062731","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 14:31:45 up  6:14,  0 users,  load average: 0.68, 0.73, 2.02
	Linux old-k8s-version-062731 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [4c82d04b6c3a] <==
	E0929 14:29:35.840214       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:29:45.841935       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["catch-all","exempt","global-default","leader-election","node-high","system","workload-high","workload-low"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:29:55.842424       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:30:05.843170       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:30:15.844099       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	I0929 14:30:25.523741       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.142.168:443: connect: connection refused
	I0929 14:30:25.523771       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 14:30:25.845347       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:30:35.846515       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:30:45.847505       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:30:55.847996       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:31:05.849143       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:31:15.849697       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["leader-election","node-high","system","workload-high","workload-low","catch-all","exempt","global-default"] items=[{},{},{},{},{},{},{},{}]
	I0929 14:31:25.524401       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.107.142.168:443: connect: connection refused
	I0929 14:31:25.524431       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E0929 14:31:25.850639       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["node-high","system","workload-high","workload-low","catch-all","exempt","global-default","leader-election"] items=[{},{},{},{},{},{},{},{}]
	W0929 14:31:26.743724       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 14:31:26.743953       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0929 14:31:26.744041       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:31:26.745315       1 handler_proxy.go:93] no RequestInfo found in the context
	E0929 14:31:26.745525       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0929 14:31:26.745541       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0929 14:31:35.851545       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E0929 14:31:45.852746       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-apiserver [9103484f3ae1] <==
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:09.379920       1 logging.go:59] [core] [Channel #69 SubChannel #71] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:09.434680       1 logging.go:59] [core] [Channel #37 SubChannel #38] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:09.466823       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {
	  "Addr": "127.0.0.1:2379",
	  "ServerName": "127.0.0.1",
	  "Attributes": null,
	  "BalancerAttributes": null,
	  "Type": 0,
	  "Metadata": null
	}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [1eb33dcdfff4] <==
	I0929 14:12:14.182836       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-z6bfh"
	I0929 14:12:14.239311       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="479.636948ms"
	I0929 14:12:14.260139       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.778113ms"
	I0929 14:12:14.260246       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.894µs"
	I0929 14:12:14.260347       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.635µs"
	I0929 14:12:14.299098       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="77.572µs"
	I0929 14:12:16.798523       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="80.239µs"
	I0929 14:12:17.416232       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I0929 14:12:17.472987       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-z6bfh"
	I0929 14:12:17.512248       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.781191ms"
	I0929 14:12:17.540935       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="28.634664ms"
	I0929 14:12:17.542175       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="45.006µs"
	I0929 14:12:17.854446       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.128µs"
	I0929 14:12:26.971023       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="112.592µs"
	I0929 14:12:27.035345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="130.569µs"
	I0929 14:12:27.336232       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="54.105µs"
	I0929 14:12:27.337674       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="51.003µs"
	I0929 14:12:45.202792       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="58.578211ms"
	I0929 14:12:45.206272       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="943.739µs"
	I0929 14:12:58.488901       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I0929 14:12:58.547637       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-fs4wn"
	I0929 14:12:58.659940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="172.089724ms"
	I0929 14:12:58.714537       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="54.347242ms"
	I0929 14:12:58.779199       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="64.432346ms"
	I0929 14:12:58.779532       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="87.23µs"
	
	
	==> kube-controller-manager [78bb7c9cf398] <==
	I0929 14:27:08.860262       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:27:38.224629       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:27:38.869203       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:28:08.229756       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:28:08.877681       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:28:38.235233       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:28:38.885702       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:29:08.240046       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:29:08.893850       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 14:29:35.016054       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.295µs"
	I0929 14:29:35.030734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="124.087µs"
	E0929 14:29:38.246538       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:29:38.902214       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 14:29:47.011194       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="71.804µs"
	I0929 14:29:51.009804       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="94.943µs"
	I0929 14:29:55.014646       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="185.208µs"
	E0929 14:30:08.250777       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:30:08.909776       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0929 14:30:09.010408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="96.181µs"
	E0929 14:30:38.256038       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:30:38.918348       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:31:08.261136       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:31:08.930488       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0929 14:31:38.266794       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0929 14:31:38.938264       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [0bebd4d0f0d7] <==
	I0929 14:12:17.072304       1 server_others.go:69] "Using iptables proxy"
	I0929 14:12:17.094715       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0929 14:12:17.206485       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:12:17.216258       1 server_others.go:152] "Using iptables Proxier"
	I0929 14:12:17.216479       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 14:12:17.216576       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 14:12:17.216688       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 14:12:17.217019       1 server.go:846] "Version info" version="v1.28.0"
	I0929 14:12:17.217379       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:12:17.256978       1 config.go:188] "Starting service config controller"
	I0929 14:12:17.257036       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 14:12:17.257076       1 config.go:97] "Starting endpoint slice config controller"
	I0929 14:12:17.257080       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 14:12:17.259590       1 config.go:315] "Starting node config controller"
	I0929 14:12:17.259719       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 14:12:17.357940       1 shared_informer.go:318] Caches are synced for service config
	I0929 14:12:17.358034       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0929 14:12:17.362325       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [43868fe5fc27] <==
	I0929 14:13:27.520972       1 server_others.go:69] "Using iptables proxy"
	I0929 14:13:27.540237       1 node.go:141] Successfully retrieved node IP: 192.168.85.2
	I0929 14:13:27.580049       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:13:27.582293       1 server_others.go:152] "Using iptables Proxier"
	I0929 14:13:27.582332       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I0929 14:13:27.582340       1 server_others.go:438] "Defaulting to no-op detect-local"
	I0929 14:13:27.582368       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0929 14:13:27.582575       1 server.go:846] "Version info" version="v1.28.0"
	I0929 14:13:27.582585       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:13:27.583541       1 config.go:188] "Starting service config controller"
	I0929 14:13:27.583567       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0929 14:13:27.583586       1 config.go:97] "Starting endpoint slice config controller"
	I0929 14:13:27.583590       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0929 14:13:27.586326       1 config.go:315] "Starting node config controller"
	I0929 14:13:27.586342       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0929 14:13:27.688937       1 shared_informer.go:318] Caches are synced for node config
	I0929 14:13:27.688986       1 shared_informer.go:318] Caches are synced for service config
	I0929 14:13:27.689022       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [a92699fef46e] <==
	I0929 14:13:22.289701       1 serving.go:348] Generated self-signed cert in-memory
	W0929 14:13:25.597194       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 14:13:25.597297       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 14:13:25.597327       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 14:13:25.597367       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 14:13:25.694356       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I0929 14:13:25.694602       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:13:25.699662       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0929 14:13:25.702604       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:13:25.702828       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 14:13:25.703029       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	W0929 14:13:25.741824       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0929 14:13:25.742102       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0929 14:13:25.820273       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [d19b472de2d4] <==
	W0929 14:11:58.975590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0929 14:11:58.975709       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0929 14:11:58.975849       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0929 14:11:58.975881       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0929 14:11:58.976022       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0929 14:11:58.976043       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0929 14:11:58.976119       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0929 14:11:58.976137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0929 14:11:58.976214       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0929 14:11:58.976232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0929 14:11:58.978006       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0929 14:11:58.978038       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 14:11:58.978329       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0929 14:11:58.978354       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0929 14:11:58.978444       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0929 14:11:58.978462       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0929 14:11:58.978542       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0929 14:11:58.978577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0929 14:11:58.978683       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0929 14:11:58.978702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0929 14:12:00.070915       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0929 14:12:59.549472       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0929 14:12:59.549887       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0929 14:12:59.550118       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0929 14:12:59.550978       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 14:29:57 old-k8s-version-062731 kubelet[1397]: E0929 14:29:57.995478    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:30:04 old-k8s-version-062731 kubelet[1397]: E0929 14:30:04.996397    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:30:08 old-k8s-version-062731 kubelet[1397]: E0929 14:30:08.995620    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:30:11 old-k8s-version-062731 kubelet[1397]: E0929 14:30:11.994797    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:30:18 old-k8s-version-062731 kubelet[1397]: E0929 14:30:18.998119    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:30:19 old-k8s-version-062731 kubelet[1397]: E0929 14:30:19.994474    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:30:22 old-k8s-version-062731 kubelet[1397]: E0929 14:30:22.996229    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:30:30 old-k8s-version-062731 kubelet[1397]: E0929 14:30:30.995386    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:30:32 old-k8s-version-062731 kubelet[1397]: E0929 14:30:32.997685    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:30:33 old-k8s-version-062731 kubelet[1397]: E0929 14:30:33.995357    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:30:44 old-k8s-version-062731 kubelet[1397]: E0929 14:30:44.997092    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:30:44 old-k8s-version-062731 kubelet[1397]: E0929 14:30:44.998222    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:30:46 old-k8s-version-062731 kubelet[1397]: E0929 14:30:46.995374    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:30:59 old-k8s-version-062731 kubelet[1397]: E0929 14:30:59.995522    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:30:59 old-k8s-version-062731 kubelet[1397]: E0929 14:30:59.996014    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:31:01 old-k8s-version-062731 kubelet[1397]: E0929 14:31:01.994632    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:31:11 old-k8s-version-062731 kubelet[1397]: E0929 14:31:11.995117    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:31:13 old-k8s-version-062731 kubelet[1397]: E0929 14:31:13.995098    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:31:15 old-k8s-version-062731 kubelet[1397]: E0929 14:31:15.994543    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:31:24 old-k8s-version-062731 kubelet[1397]: E0929 14:31:24.997684    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:31:27 old-k8s-version-062731 kubelet[1397]: E0929 14:31:27.995097    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:31:30 old-k8s-version-062731 kubelet[1397]: E0929 14:31:30.997257    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	Sep 29 14:31:38 old-k8s-version-062731 kubelet[1397]: E0929 14:31:38.998939    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\"\"" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-2srlk" podUID="0ead75df-9638-4d39-af53-82c7b8b1bc64"
	Sep 29 14:31:41 old-k8s-version-062731 kubelet[1397]: E0929 14:31:41.994822    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-fs4wn" podUID="40bef347-d14e-4938-a46b-5ce53f50ccae"
	Sep 29 14:31:44 old-k8s-version-062731 kubelet[1397]: E0929 14:31:44.999106    1397 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\"\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-jmjhf" podUID="f0526aa0-4a9e-40fa-9580-77adad166379"
	
	
	==> storage-provisioner [4de42700ff46] <==
	I0929 14:13:27.860119       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 14:13:57.868123       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [67ee15eacad4] <==
	I0929 14:14:13.188873       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0929 14:14:13.209840       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0929 14:14:13.209918       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0929 14:14:30.627249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0929 14:14:30.627670       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-062731_7e7ea937-39f1-4124-8351-bb9fa1f395c7!
	I0929 14:14:30.627403       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"484ee430-38ff-40b6-a402-1d5e1b0d6e78", APIVersion:"v1", ResourceVersion:"737", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-062731_7e7ea937-39f1-4124-8351-bb9fa1f395c7 became leader
	I0929 14:14:30.728290       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-062731_7e7ea937-39f1-4124-8351-bb9fa1f395c7!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-062731 -n old-k8s-version-062731
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-062731 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-57f55c9bc5-fs4wn dashboard-metrics-scraper-5f989dc9cf-jmjhf kubernetes-dashboard-8694d4445c-2srlk
helpers_test.go:282: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context old-k8s-version-062731 describe pod metrics-server-57f55c9bc5-fs4wn dashboard-metrics-scraper-5f989dc9cf-jmjhf kubernetes-dashboard-8694d4445c-2srlk
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context old-k8s-version-062731 describe pod metrics-server-57f55c9bc5-fs4wn dashboard-metrics-scraper-5f989dc9cf-jmjhf kubernetes-dashboard-8694d4445c-2srlk: exit status 1 (93.600634ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-fs4wn" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-5f989dc9cf-jmjhf" not found
	Error from server (NotFound): pods "kubernetes-dashboard-8694d4445c-2srlk" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context old-k8s-version-062731 describe pod metrics-server-57f55c9bc5-fs4wn dashboard-metrics-scraper-5f989dc9cf-jmjhf kubernetes-dashboard-8694d4445c-2srlk: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (543.20s)
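For reference, the post-mortem pod listing above (helpers_test.go:269) can be reproduced outside the harness. The snippet below is a minimal sketch, not part of the minikube test suite: it shells out to the same kubectl query recorded in the log (all namespaces, status.phase!=Running). The context name "old-k8s-version-062731" is specific to this run, and the helper name nonRunningPods is made up for illustration.

// post_mortem_sketch.go - hypothetical helper mirroring the harness's
// non-running-pod query; requires kubectl on PATH and the named context.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

// nonRunningPods returns the names of pods whose status.phase is not Running,
// across all namespaces, for the given kubectl context.
func nonRunningPods(kubeContext string) ([]string, error) {
	out, err := exec.Command(
		"kubectl", "--context", kubeContext,
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := nonRunningPods("old-k8s-version-062731")
	if err != nil {
		log.Fatalf("kubectl failed: %v", err)
	}
	fmt.Println("non-running pods:", pods)
}

One observation on the output above: the follow-up describe at helpers_test.go:285 passes no namespace, while the three pods live in kube-system and kubernetes-dashboard, which likely explains why they are reported NotFound even though the field-selector query had just listed them.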

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-kpkl2" [80983d01-da8e-4456-bdd9-c6b9c062762d] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 14:25:01.311353 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:25:03.685042 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:25:20.566372 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:25:38.296832 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:25:52.915742 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:26:26.747277 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:26:53.256697 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:27:00.344606 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:27:02.972770 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:27:03.455646 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:27:50.245339 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:28:26.519496 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:28:59.883052 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:29:02.358846 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:29:13.317672 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:30:01.311685 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:30:03.684424 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:30:20.566343 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:30:25.423694 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:30:38.296712 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:30:52.915822 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:31:24.380778 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
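The repeated cert_rotation errors above are not part of this test: the shared kubeconfig still carries auth entries for profiles that earlier tests deleted (flannel-212797, auto-212797, and so on), so client-go keeps trying to reload client certificates whose files are gone. A minimal sketch, assuming the standard client-go kubeconfig loader, that lists auth entries whose client-certificate file no longer exists:

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Default kubeconfig location; this CI run points KUBECONFIG at the
	// minikube-integration workspace instead.
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = filepath.Join(os.Getenv("HOME"), ".kube", "config")
	}
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	for name, auth := range cfg.AuthInfos {
		if auth.ClientCertificate == "" {
			continue
		}
		if _, statErr := os.Stat(auth.ClientCertificate); os.IsNotExist(statErr) {
			fmt.Printf("stale client cert for %q: %s\n", name, auth.ClientCertificate)
		}
	}
}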
helpers_test.go:337: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-983174 -n no-preload-983174
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 14:33:05.30620592 +0000 UTC m=+5478.571419281
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-983174 describe po kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context no-preload-983174 describe po kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-kpkl2
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             no-preload-983174/192.168.76.2
Start Time:       Mon, 29 Sep 2025 14:14:30 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.10
IPs:
IP:           10.244.0.10
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f8dx6 (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-f8dx6:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2 to no-preload-983174
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m30s (x64 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m30s (x64 over 18m)  kubelet            Error: ImagePullBackOff
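The root cause in the events above is Docker Hub's unauthenticated pull limit (toomanyrequests), not the dashboard addon itself: the kubelet keeps retrying the digest-pinned image and stays in ImagePullBackOff until the 9m0s wait expires. One way to make such runs independent of Hub quota is to pull the image once on the host (authenticated, if credentials are available) and side-load it into the profile; a minimal sketch via os/exec, using the plain minikube and docker CLIs rather than the out/minikube-linux-arm64 path the harness uses. Whether the digest-pinned reference in the pod spec is then satisfied without another pull depends on the container runtime, so treat this as a sketch of the approach, not a guaranteed fix:

package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v failed: %v\n%s", name, args, err, out)
	}
}

func main() {
	image := "docker.io/kubernetesui/dashboard:v2.7.0"
	profile := "no-preload-983174"

	// Pull on the host, then load into the profile's runtime so the kubelet
	// does not need to reach docker.io at all.
	run("docker", "pull", image)
	run("minikube", "-p", profile, "image", "load", image)
}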
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context no-preload-983174 logs kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-983174 logs kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard: exit status 1 (122.363558ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-kpkl2" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context no-preload-983174 logs kubernetes-dashboard-855c9754f9-kpkl2 -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-983174 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-983174
helpers_test.go:243: (dbg) docker inspect no-preload-983174:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d",
	        "Created": "2025-09-29T14:12:28.585280253Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1556794,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T14:14:09.708801143Z",
	            "FinishedAt": "2025-09-29T14:14:08.873362901Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d/hostname",
	        "HostsPath": "/var/lib/docker/containers/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d/hosts",
	        "LogPath": "/var/lib/docker/containers/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d/f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d-json.log",
	        "Name": "/no-preload-983174",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-983174:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-983174",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "f588a9dd031f7bd3dd61b9e38a8d3303c88dd8db21040780f759984cabd4e75d",
	                "LowerDir": "/var/lib/docker/overlay2/d921a03d5757f431a924575c97db02cbf463270d6a3676dd15d1844e7f80e644-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/d921a03d5757f431a924575c97db02cbf463270d6a3676dd15d1844e7f80e644/merged",
	                "UpperDir": "/var/lib/docker/overlay2/d921a03d5757f431a924575c97db02cbf463270d6a3676dd15d1844e7f80e644/diff",
	                "WorkDir": "/var/lib/docker/overlay2/d921a03d5757f431a924575c97db02cbf463270d6a3676dd15d1844e7f80e644/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-983174",
	                "Source": "/var/lib/docker/volumes/no-preload-983174/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-983174",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-983174",
	                "name.minikube.sigs.k8s.io": "no-preload-983174",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "a2bb087298b729fe50b6b8b6349476b95b71940799a5347c1d150f1268cad335",
	            "SandboxKey": "/var/run/docker/netns/a2bb087298b7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34291"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34292"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34295"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34293"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34294"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-983174": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:d3:56:45:98:50",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "c8b107e545669f34bd8328f74f0d3a601475a7ffdc4b152c45ea58429e814854",
	                    "EndpointID": "77d1452714aefd40dff3a851f99aacaf7f24c13581907fb53a55aac0a5146483",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-983174",
	                        "f588a9dd031f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
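The NetworkSettings.Ports map in the inspect output above is where minikube looks up the randomly assigned host ports (here 127.0.0.1:34294 fronts the apiserver's 8443/tcp). The provisioning log further down performs exactly this lookup for 22/tcp with a docker --format template; a minimal sketch of the same query via os/exec, with the profile name taken from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort returns the host port docker published for a container port,
// using the same Go template the provisioning log below uses for "22/tcp".
func hostPort(container, containerPort string) (string, error) {
	format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, containerPort)
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostPort("no-preload-983174", "8443/tcp")
	fmt.Println(port, err)
}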
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-983174 -n no-preload-983174
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-983174 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p no-preload-983174 logs -n 25: (1.360582005s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────────
───┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────────
───┤
	│ ssh     │ -p kubenet-212797 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat containerd --no-pager                                                                                                                                                                                      │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                               │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo cat /etc/containerd/config.toml                                                                                                                                                                                          │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo containerd config dump                                                                                                                                                                                                   │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                            │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │                     │
	│ ssh     │ -p kubenet-212797 sudo systemctl cat crio --no-pager                                                                                                                                                                                            │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                  │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ ssh     │ -p kubenet-212797 sudo crio config                                                                                                                                                                                                              │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ delete  │ -p kubenet-212797                                                                                                                                                                                                                               │ kubenet-212797         │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ start   │ -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-062731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                    │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:12 UTC │
	│ stop    │ -p old-k8s-version-062731 --alsologtostderr -v=3                                                                                                                                                                                                │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:12 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-062731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                               │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ start   │ -p old-k8s-version-062731 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0 │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ addons  │ enable metrics-server -p no-preload-983174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:13 UTC │
	│ stop    │ -p no-preload-983174 --alsologtostderr -v=3                                                                                                                                                                                                     │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:13 UTC │ 29 Sep 25 14:14 UTC │
	│ addons  │ enable dashboard -p no-preload-983174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:14 UTC │ 29 Sep 25 14:14 UTC │
	│ start   │ -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                       │ no-preload-983174      │ jenkins │ v1.37.0 │ 29 Sep 25 14:14 UTC │ 29 Sep 25 14:15 UTC │
	│ image   │ old-k8s-version-062731 image list --format=json                                                                                                                                                                                                 │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:31 UTC │ 29 Sep 25 14:31 UTC │
	│ pause   │ -p old-k8s-version-062731 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:31 UTC │ 29 Sep 25 14:31 UTC │
	│ unpause │ -p old-k8s-version-062731 --alsologtostderr -v=1                                                                                                                                                                                                │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:31 UTC │ 29 Sep 25 14:31 UTC │
	│ delete  │ -p old-k8s-version-062731                                                                                                                                                                                                                       │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:31 UTC │ 29 Sep 25 14:31 UTC │
	│ delete  │ -p old-k8s-version-062731                                                                                                                                                                                                                       │ old-k8s-version-062731 │ jenkins │ v1.37.0 │ 29 Sep 25 14:31 UTC │ 29 Sep 25 14:31 UTC │
	│ start   │ -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-641794     │ jenkins │ v1.37.0 │ 29 Sep 25 14:31 UTC │ 29 Sep 25 14:33 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────────
───┘
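The Audit table above is part of the output of the `logs -n 25` invocation recorded earlier in this post-mortem; it lists the minikube commands run against this MINIKUBE_HOME with profile, user, version, and timestamps. A minimal sketch that reruns that command and prints only the Audit section (the binary path matches the CI workspace layout and is an assumption for any other checkout):

package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "no-preload-983174", "logs", "-n", "25").Output()
	if err != nil {
		panic(err)
	}
	inAudit := false
	sc := bufio.NewScanner(strings.NewReader(string(out)))
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // audit rows are very long lines
	for sc.Scan() {
		line := sc.Text()
		if strings.Contains(line, "==> Audit <==") {
			inAudit = true
		} else if inAudit && strings.Contains(line, "==>") {
			inAudit = false // the next section header ends the Audit block
		}
		if inAudit {
			fmt.Println(line)
		}
	}
}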
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 14:31:52
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 14:31:52.833287 1569999 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:31:52.833484 1569999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:31:52.833513 1569999 out.go:374] Setting ErrFile to fd 2...
	I0929 14:31:52.833533 1569999 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:31:52.833815 1569999 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:31:52.834316 1569999 out.go:368] Setting JSON to false
	I0929 14:31:52.835603 1569999 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22465,"bootTime":1759133848,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:31:52.835708 1569999 start.go:140] virtualization:  
	I0929 14:31:52.839897 1569999 out.go:179] * [embed-certs-641794] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:31:52.843469 1569999 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:31:52.843527 1569999 notify.go:220] Checking for updates...
	I0929 14:31:52.849740 1569999 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:31:52.852947 1569999 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:31:52.856634 1569999 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:31:52.859550 1569999 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:31:52.862959 1569999 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:31:52.866657 1569999 config.go:182] Loaded profile config "no-preload-983174": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:31:52.866768 1569999 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:31:52.901598 1569999 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:31:52.901732 1569999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:31:52.970510 1569999 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:31:52.960031359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:31:52.970624 1569999 docker.go:318] overlay module found
	I0929 14:31:52.973842 1569999 out.go:179] * Using the docker driver based on user configuration
	I0929 14:31:52.976685 1569999 start.go:304] selected driver: docker
	I0929 14:31:52.976712 1569999 start.go:924] validating driver "docker" against <nil>
	I0929 14:31:52.976728 1569999 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:31:52.977568 1569999 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:31:53.039982 1569999 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:31:53.030626512 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:31:53.040131 1569999 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 14:31:53.040363 1569999 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:31:53.043313 1569999 out.go:179] * Using Docker driver with root privileges
	I0929 14:31:53.046281 1569999 cni.go:84] Creating CNI manager for ""
	I0929 14:31:53.046369 1569999 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:31:53.046383 1569999 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 14:31:53.046470 1569999 start.go:348] cluster config:
	{Name:embed-certs-641794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-641794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISock
et: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseI
nterval:1m0s}
	I0929 14:31:53.049631 1569999 out.go:179] * Starting "embed-certs-641794" primary control-plane node in "embed-certs-641794" cluster
	I0929 14:31:53.052360 1569999 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:31:53.055340 1569999 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:31:53.058247 1569999 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:31:53.058306 1569999 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 14:31:53.058317 1569999 cache.go:58] Caching tarball of preloaded images
	I0929 14:31:53.058370 1569999 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:31:53.058430 1569999 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 14:31:53.058440 1569999 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 14:31:53.058545 1569999 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/config.json ...
	I0929 14:31:53.058566 1569999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/config.json: {Name:mkc976cb07359de3547e90b802465e5a7f5b7ea4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
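The two lines above show the cluster config being persisted to .minikube/profiles/embed-certs-641794/config.json under a file lock. A minimal sketch that reads a couple of fields back from that file; the struct below mirrors field names from the cluster config dump earlier in this log (Name, Driver, KubernetesConfig.KubernetesVersion) and assumes the on-disk JSON uses the same keys:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Partial, assumed view of the profile config file.
type profileConfig struct {
	Name             string
	Driver           string
	KubernetesConfig struct {
		ClusterName       string
		KubernetesVersion string
		ContainerRuntime  string
	}
}

func main() {
	// MINIKUBE_HOME in this run is the minikube-integration workspace;
	// $HOME/.minikube is the usual default.
	b, err := os.ReadFile(os.ExpandEnv("$HOME/.minikube/profiles/embed-certs-641794/config.json"))
	if err != nil {
		panic(err)
	}
	var cfg profileConfig
	if err := json.Unmarshal(b, &cfg); err != nil {
		panic(err)
	}
	fmt.Println(cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion)
}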
	I0929 14:31:53.080476 1569999 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:31:53.080499 1569999 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:31:53.080555 1569999 cache.go:232] Successfully downloaded all kic artifacts
	I0929 14:31:53.080583 1569999 start.go:360] acquireMachinesLock for embed-certs-641794: {Name:mkf71112567d23ec725f0f747cdd5cd2c98d27c1 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:31:53.080707 1569999 start.go:364] duration metric: took 103.058µs to acquireMachinesLock for "embed-certs-641794"
	I0929 14:31:53.080742 1569999 start.go:93] Provisioning new machine with config: &{Name:embed-certs-641794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-641794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServ
erNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:31:53.080821 1569999 start.go:125] createHost starting for "" (driver="docker")
	I0929 14:31:53.087779 1569999 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0929 14:31:53.088032 1569999 start.go:159] libmachine.API.Create for "embed-certs-641794" (driver="docker")
	I0929 14:31:53.088068 1569999 client.go:168] LocalClient.Create starting
	I0929 14:31:53.088136 1569999 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem
	I0929 14:31:53.088171 1569999 main.go:141] libmachine: Decoding PEM data...
	I0929 14:31:53.088185 1569999 main.go:141] libmachine: Parsing certificate...
	I0929 14:31:53.088252 1569999 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem
	I0929 14:31:53.088270 1569999 main.go:141] libmachine: Decoding PEM data...
	I0929 14:31:53.088284 1569999 main.go:141] libmachine: Parsing certificate...
	I0929 14:31:53.088674 1569999 cli_runner.go:164] Run: docker network inspect embed-certs-641794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0929 14:31:53.110354 1569999 cli_runner.go:211] docker network inspect embed-certs-641794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0929 14:31:53.110450 1569999 network_create.go:284] running [docker network inspect embed-certs-641794] to gather additional debugging logs...
	I0929 14:31:53.110477 1569999 cli_runner.go:164] Run: docker network inspect embed-certs-641794
	W0929 14:31:53.126063 1569999 cli_runner.go:211] docker network inspect embed-certs-641794 returned with exit code 1
	I0929 14:31:53.126095 1569999 network_create.go:287] error running [docker network inspect embed-certs-641794]: docker network inspect embed-certs-641794: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-641794 not found
	I0929 14:31:53.126107 1569999 network_create.go:289] output of [docker network inspect embed-certs-641794]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-641794 not found
	
	** /stderr **
	I0929 14:31:53.126226 1569999 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:31:53.147267 1569999 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85cc826cc833 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e6:9d:b6:86:22:ad} reservation:<nil>}
	I0929 14:31:53.147522 1569999 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-aee8219e46ea IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:26:48:5c:79:e0:92} reservation:<nil>}
	I0929 14:31:53.147820 1569999 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-415857c413ae IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:ba:0e:aa:55:e2:18} reservation:<nil>}
	I0929 14:31:53.148074 1569999 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-c8b107e54566 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:5e:80:ae:be:89:e1} reservation:<nil>}
	I0929 14:31:53.148476 1569999 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b7520}
	I0929 14:31:53.148532 1569999 network_create.go:124] attempt to create docker network embed-certs-641794 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I0929 14:31:53.148642 1569999 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-641794 embed-certs-641794
	I0929 14:31:53.208782 1569999 network_create.go:108] docker network embed-certs-641794 192.168.85.0/24 created
	I0929 14:31:53.208815 1569999 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-641794" container
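The network_create lines above show the subnet walk: 192.168.49.0/24, 58, 67 and 76 are already held by other profile bridges, so 192.168.85.0/24 is the first free /24 and the node gets the .2 address in it. A minimal sketch of the same idea, comparing candidate subnets against the subnets of existing docker networks (the fixed step of 9 in the third octet matches the sequence in this log but is otherwise an assumption, and this is not minikube's actual implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Collect the IPv4 subnets already used by docker networks.
	ids, err := exec.Command("docker", "network", "ls", "-q").Output()
	if err != nil {
		panic(err)
	}
	taken := map[string]bool{}
	for _, id := range strings.Fields(string(ids)) {
		out, err := exec.Command("docker", "network", "inspect",
			"--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}", id).Output()
		if err != nil {
			continue
		}
		for _, s := range strings.Fields(string(out)) {
			taken[s] = true
		}
	}

	// Walk 192.168.49.0/24, 192.168.58.0/24, ... and take the first free one.
	for octet := 49; octet <= 247; octet += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", octet)
		if !taken[subnet] {
			fmt.Println("first free subnet:", subnet)
			return
		}
	}
	fmt.Println("no free candidate subnet found")
}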
	I0929 14:31:53.208886 1569999 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0929 14:31:53.224960 1569999 cli_runner.go:164] Run: docker volume create embed-certs-641794 --label name.minikube.sigs.k8s.io=embed-certs-641794 --label created_by.minikube.sigs.k8s.io=true
	I0929 14:31:53.246548 1569999 oci.go:103] Successfully created a docker volume embed-certs-641794
	I0929 14:31:53.246639 1569999 cli_runner.go:164] Run: docker run --rm --name embed-certs-641794-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-641794 --entrypoint /usr/bin/test -v embed-certs-641794:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0929 14:31:53.797508 1569999 oci.go:107] Successfully prepared a docker volume embed-certs-641794
	I0929 14:31:53.797559 1569999 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:31:53.797579 1569999 kic.go:194] Starting extracting preloaded images to volume ...
	I0929 14:31:53.797659 1569999 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-641794:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0929 14:31:58.063986 1569999 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4:/preloaded.tar:ro -v embed-certs-641794:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.266286287s)
	I0929 14:31:58.064020 1569999 kic.go:203] duration metric: took 4.266437509s to extract preloaded images to volume ...
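The preload steps above resolve a cached tarball whose name encodes the preload schema, Kubernetes version, container runtime, storage driver, and architecture, and skip the download when it is already on disk. A minimal sketch of that existence check; the filename pattern is copied from this log, while the cache root and the v18 schema component are simply passed in as inputs:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// preloadPath rebuilds the tarball path seen in this log:
//   <minikubeHome>/cache/preloaded-tarball/preloaded-images-k8s-<schema>-<k8sVersion>-<runtime>-<storageDriver>-<arch>.tar.lz4
func preloadPath(minikubeHome, schema, k8sVersion, runtime, storageDriver, arch string) string {
	name := fmt.Sprintf("preloaded-images-k8s-%s-%s-%s-%s-%s.tar.lz4",
		schema, k8sVersion, runtime, storageDriver, arch)
	return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
}

func main() {
	p := preloadPath(os.ExpandEnv("$HOME/.minikube"), "v18", "v1.34.0", "docker", "overlay2", "arm64")
	if _, err := os.Stat(p); err == nil {
		fmt.Println("found local preload:", p)
	} else {
		fmt.Println("no local preload, would download:", p)
	}
}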
	W0929 14:31:58.064179 1569999 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0929 14:31:58.064308 1569999 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0929 14:31:58.120732 1569999 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-641794 --name embed-certs-641794 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-641794 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-641794 --network embed-certs-641794 --ip 192.168.85.2 --volume embed-certs-641794:/var --security-opt apparmor=unconfined --memory=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0929 14:31:58.436380 1569999 cli_runner.go:164] Run: docker container inspect embed-certs-641794 --format={{.State.Running}}
	I0929 14:31:58.461899 1569999 cli_runner.go:164] Run: docker container inspect embed-certs-641794 --format={{.State.Status}}
	I0929 14:31:58.486427 1569999 cli_runner.go:164] Run: docker exec embed-certs-641794 stat /var/lib/dpkg/alternatives/iptables
	I0929 14:31:58.543051 1569999 oci.go:144] the created container "embed-certs-641794" has a running status.
	I0929 14:31:58.543087 1569999 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa...
	I0929 14:31:59.810944 1569999 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0929 14:31:59.831330 1569999 cli_runner.go:164] Run: docker container inspect embed-certs-641794 --format={{.State.Status}}
	I0929 14:31:59.848765 1569999 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0929 14:31:59.848790 1569999 kic_runner.go:114] Args: [docker exec --privileged embed-certs-641794 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0929 14:31:59.903150 1569999 cli_runner.go:164] Run: docker container inspect embed-certs-641794 --format={{.State.Status}}
	I0929 14:31:59.922694 1569999 machine.go:93] provisionDockerMachine start ...
	I0929 14:31:59.922808 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:31:59.941072 1569999 main.go:141] libmachine: Using SSH client type: native
	I0929 14:31:59.941439 1569999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34296 <nil> <nil>}
	I0929 14:31:59.941455 1569999 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:32:00.163603 1569999 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-641794
	
	I0929 14:32:00.163636 1569999 ubuntu.go:182] provisioning hostname "embed-certs-641794"
	I0929 14:32:00.163731 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:00.212646 1569999 main.go:141] libmachine: Using SSH client type: native
	I0929 14:32:00.212988 1569999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34296 <nil> <nil>}
	I0929 14:32:00.213007 1569999 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-641794 && echo "embed-certs-641794" | sudo tee /etc/hostname
	I0929 14:32:00.416590 1569999 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-641794
	
	I0929 14:32:00.416702 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:00.439472 1569999 main.go:141] libmachine: Using SSH client type: native
	I0929 14:32:00.439827 1569999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34296 <nil> <nil>}
	I0929 14:32:00.439863 1569999 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-641794' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-641794/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-641794' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:32:00.585753 1569999 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:32:00.585786 1569999 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:32:00.585930 1569999 ubuntu.go:190] setting up certificates
	I0929 14:32:00.585945 1569999 provision.go:84] configureAuth start
	I0929 14:32:00.586064 1569999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-641794
	I0929 14:32:00.604731 1569999 provision.go:143] copyHostCerts
	I0929 14:32:00.604802 1569999 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:32:00.604826 1569999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:32:00.604907 1569999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:32:00.605002 1569999 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:32:00.605013 1569999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:32:00.605040 1569999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:32:00.605103 1569999 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:32:00.605112 1569999 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:32:00.605137 1569999 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:32:00.605198 1569999 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.embed-certs-641794 san=[127.0.0.1 192.168.85.2 embed-certs-641794 localhost minikube]
	I0929 14:32:01.104104 1569999 provision.go:177] copyRemoteCerts
	I0929 14:32:01.104179 1569999 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:32:01.104224 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:01.123765 1569999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34296 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa Username:docker}
	I0929 14:32:01.225943 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:32:01.255980 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0929 14:32:01.280963 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 14:32:01.307584 1569999 provision.go:87] duration metric: took 721.6085ms to configureAuth
	I0929 14:32:01.307611 1569999 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:32:01.307813 1569999 config.go:182] Loaded profile config "embed-certs-641794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:32:01.307874 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:01.325787 1569999 main.go:141] libmachine: Using SSH client type: native
	I0929 14:32:01.326098 1569999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34296 <nil> <nil>}
	I0929 14:32:01.326111 1569999 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:32:01.469260 1569999 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:32:01.469308 1569999 ubuntu.go:71] root file system type: overlay
	I0929 14:32:01.469446 1569999 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:32:01.469541 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:01.488339 1569999 main.go:141] libmachine: Using SSH client type: native
	I0929 14:32:01.488686 1569999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34296 <nil> <nil>}
	I0929 14:32:01.488777 1569999 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:32:01.641257 1569999 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:32:01.641345 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:01.658132 1569999 main.go:141] libmachine: Using SSH client type: native
	I0929 14:32:01.658440 1569999 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34296 <nil> <nil>}
	I0929 14:32:01.658458 1569999 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:32:02.531640 1569999 main.go:141] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-09-03 20:57:01.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-09-29 14:32:01.635452861 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0929 14:32:02.531731 1569999 machine.go:96] duration metric: took 2.609016645s to provisionDockerMachine
	I0929 14:32:02.531759 1569999 client.go:171] duration metric: took 9.443683902s to LocalClient.Create
	I0929 14:32:02.531801 1569999 start.go:167] duration metric: took 9.443770368s to libmachine.API.Create "embed-certs-641794"
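	[editor's note] The provisioning steps above repeatedly run "docker container inspect -f" with a Go template to find the host port mapped to the container's 22/tcp (34296 in this run). The following is a minimal sketch, not minikube's code, of that lookup; it assumes the docker CLI is on PATH and a container named "embed-certs-641794" exists.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort resolves the host port published for the container's 22/tcp,
	// using the same inspect format string that appears throughout the log.
	func sshHostPort(container string) (string, error) {
		format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("embed-certs-641794")
		if err != nil {
			fmt.Println("error:", err)
			return
		}
		// In the run above this resolved to 34296, which libmachine then dialed at 127.0.0.1.
		fmt.Println("ssh port:", port)
	}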
	I0929 14:32:02.531827 1569999 start.go:293] postStartSetup for "embed-certs-641794" (driver="docker")
	I0929 14:32:02.531852 1569999 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:32:02.531961 1569999 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:32:02.532025 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:02.550873 1569999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34296 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa Username:docker}
	I0929 14:32:02.649648 1569999 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:32:02.653020 1569999 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:32:02.653055 1569999 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:32:02.653068 1569999 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:32:02.653075 1569999 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:32:02.653089 1569999 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:32:02.653189 1569999 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:32:02.653295 1569999 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:32:02.653458 1569999 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:32:02.662567 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:32:02.686923 1569999 start.go:296] duration metric: took 155.068045ms for postStartSetup
	I0929 14:32:02.687301 1569999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-641794
	I0929 14:32:02.707789 1569999 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/config.json ...
	I0929 14:32:02.708116 1569999 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:32:02.708175 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:02.741960 1569999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34296 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa Username:docker}
	I0929 14:32:02.837939 1569999 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:32:02.843008 1569999 start.go:128] duration metric: took 9.762165802s to createHost
	I0929 14:32:02.843032 1569999 start.go:83] releasing machines lock for "embed-certs-641794", held for 9.76231288s
	I0929 14:32:02.843132 1569999 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-641794
	I0929 14:32:02.860614 1569999 ssh_runner.go:195] Run: cat /version.json
	I0929 14:32:02.860675 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:02.860928 1569999 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:32:02.860987 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:02.879458 1569999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34296 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa Username:docker}
	I0929 14:32:02.888749 1569999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34296 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa Username:docker}
	I0929 14:32:02.975929 1569999 ssh_runner.go:195] Run: systemctl --version
	I0929 14:32:03.110012 1569999 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:32:03.114451 1569999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:32:03.139932 1569999 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:32:03.140048 1569999 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:32:03.170720 1569999 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0929 14:32:03.170750 1569999 start.go:495] detecting cgroup driver to use...
	I0929 14:32:03.170782 1569999 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:32:03.170877 1569999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:32:03.187797 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:32:03.197448 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:32:03.207777 1569999 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:32:03.207863 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:32:03.218323 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:32:03.227915 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:32:03.241649 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:32:03.253442 1569999 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:32:03.262664 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:32:03.272551 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:32:03.282816 1569999 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:32:03.292620 1569999 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:32:03.301337 1569999 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 14:32:03.309913 1569999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:32:03.402767 1569999 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 14:32:03.489874 1569999 start.go:495] detecting cgroup driver to use...
	I0929 14:32:03.489971 1569999 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:32:03.490053 1569999 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:32:03.504467 1569999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:32:03.518377 1569999 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:32:03.546871 1569999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:32:03.558325 1569999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:32:03.570786 1569999 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:32:03.588542 1569999 ssh_runner.go:195] Run: which cri-dockerd
	I0929 14:32:03.592120 1569999 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:32:03.601068 1569999 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:32:03.619268 1569999 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:32:03.714716 1569999 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:32:03.810211 1569999 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:32:03.810343 1569999 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 14:32:03.831379 1569999 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:32:03.846922 1569999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:32:03.943456 1569999 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 14:32:04.367693 1569999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 14:32:04.380696 1569999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:32:04.394021 1569999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:32:04.406467 1569999 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:32:04.502433 1569999 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:32:04.600001 1569999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:32:04.690765 1569999 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:32:04.705604 1569999 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:32:04.717608 1569999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:32:04.814326 1569999 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:32:04.895431 1569999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:32:04.909968 1569999 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:32:04.910092 1569999 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:32:04.913972 1569999 start.go:563] Will wait 60s for crictl version
	I0929 14:32:04.914098 1569999 ssh_runner.go:195] Run: which crictl
	I0929 14:32:04.917863 1569999 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:32:04.960816 1569999 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 14:32:04.960929 1569999 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:32:04.985296 1569999 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:32:05.019336 1569999 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:32:05.019468 1569999 cli_runner.go:164] Run: docker network inspect embed-certs-641794 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:32:05.036678 1569999 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I0929 14:32:05.040226 1569999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:32:05.054877 1569999 kubeadm.go:875] updating cluster {Name:embed-certs-641794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-641794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:32:05.055002 1569999 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:32:05.055059 1569999 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:32:05.075211 1569999 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 14:32:05.075235 1569999 docker.go:621] Images already preloaded, skipping extraction
	I0929 14:32:05.075302 1569999 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:32:05.094542 1569999 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0929 14:32:05.094569 1569999 cache_images.go:85] Images are preloaded, skipping loading
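	[editor's note] The "Images are preloaded, skipping loading" decision above amounts to listing the runtime's images and checking that the expected set is present. Below is a hedged sketch of that check, not minikube's cache_images.go; the expected list is copied from the stdout block above, and the docker CLI is assumed to be on PATH.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// wanted mirrors the image list reported in the preload stdout above.
	var wanted = []string{
		"registry.k8s.io/kube-apiserver:v1.34.0",
		"registry.k8s.io/kube-controller-manager:v1.34.0",
		"registry.k8s.io/kube-scheduler:v1.34.0",
		"registry.k8s.io/kube-proxy:v1.34.0",
		"registry.k8s.io/etcd:3.6.4-0",
		"registry.k8s.io/pause:3.10.1",
		"registry.k8s.io/coredns/coredns:v1.12.1",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}

	func main() {
		// Same listing command the log runs over SSH.
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println("docker images:", err)
			return
		}
		have := map[string]bool{}
		for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
			have[line] = true
		}
		var missing []string
		for _, img := range wanted {
			if !have[img] {
				missing = append(missing, img)
			}
		}
		if len(missing) == 0 {
			fmt.Println("images already preloaded, extraction can be skipped")
		} else {
			fmt.Println("missing images:", missing)
		}
	}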
	I0929 14:32:05.094581 1569999 kubeadm.go:926] updating node { 192.168.85.2 8443 v1.34.0 docker true true} ...
	I0929 14:32:05.094669 1569999 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-641794 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:embed-certs-641794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 14:32:05.094737 1569999 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:32:05.156198 1569999 cni.go:84] Creating CNI manager for ""
	I0929 14:32:05.156226 1569999 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:32:05.156237 1569999 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:32:05.156260 1569999 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-641794 NodeName:embed-certs-641794 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:32:05.156386 1569999 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "embed-certs-641794"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 14:32:05.156460 1569999 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:32:05.165725 1569999 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 14:32:05.165797 1569999 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:32:05.174695 1569999 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0929 14:32:05.193526 1569999 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:32:05.212433 1569999 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2219 bytes)
	I0929 14:32:05.230669 1569999 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:32:05.234129 1569999 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:32:05.249672 1569999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:32:05.342048 1569999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:32:05.357784 1569999 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794 for IP: 192.168.85.2
	I0929 14:32:05.357859 1569999 certs.go:194] generating shared ca certs ...
	I0929 14:32:05.357890 1569999 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:32:05.358071 1569999 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:32:05.358133 1569999 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:32:05.358154 1569999 certs.go:256] generating profile certs ...
	I0929 14:32:05.358238 1569999 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/client.key
	I0929 14:32:05.358274 1569999 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/client.crt with IP's: []
	I0929 14:32:06.185466 1569999 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/client.crt ...
	I0929 14:32:06.185503 1569999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/client.crt: {Name:mk9b46cebb44af76d6da725c68cf09840dfae6b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:32:06.185705 1569999 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/client.key ...
	I0929 14:32:06.185719 1569999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/client.key: {Name:mk393f4d4b63004765715b26a3af4d43883cae74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:32:06.185812 1569999 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.key.70791d2d
	I0929 14:32:06.185832 1569999 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.crt.70791d2d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I0929 14:32:06.969732 1569999 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.crt.70791d2d ...
	I0929 14:32:06.969763 1569999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.crt.70791d2d: {Name:mka9bc73fc3b2c50b040b6eb3a2a1f8922281b61 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:32:06.969953 1569999 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.key.70791d2d ...
	I0929 14:32:06.969968 1569999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.key.70791d2d: {Name:mk416cfac7c018058078fd0d729cf806b6b9cbe0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:32:06.970046 1569999 certs.go:381] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.crt.70791d2d -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.crt
	I0929 14:32:06.970126 1569999 certs.go:385] copying /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.key.70791d2d -> /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.key
	I0929 14:32:06.970186 1569999 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/proxy-client.key
	I0929 14:32:06.970204 1569999 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/proxy-client.crt with IP's: []
	I0929 14:32:07.436038 1569999 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/proxy-client.crt ...
	I0929 14:32:07.436070 1569999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/proxy-client.crt: {Name:mka14842486fce7e6083b8c13b0428fcf8c1a4cb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:32:07.436264 1569999 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/proxy-client.key ...
	I0929 14:32:07.436278 1569999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/proxy-client.key: {Name:mk8553809d35f47c6377f88559dc3460bb413f6e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
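	[editor's note] The "generating signed profile cert" steps above create CA-signed certificates with IP SANs (for the apiserver cert: 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2, copied from the log line earlier). The following is a minimal, self-contained sketch of that pattern using crypto/x509; it is not minikube's crypto.go, error handling is elided, and the CA, names, and lifetimes are illustrative assumptions (minikube reuses its existing ca.crt/ca.key).

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Illustrative self-signed CA standing in for the reused minikubeCA.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server certificate with the IP SANs reported for the apiserver profile cert.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// Emit the signed cert as PEM (minikube instead writes apiserver.crt/.key under the profile dir).
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}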
	I0929 14:32:07.436458 1569999 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:32:07.436517 1569999 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:32:07.436529 1569999 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:32:07.436555 1569999 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:32:07.436582 1569999 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:32:07.436607 1569999 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:32:07.436655 1569999 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:32:07.437283 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:32:07.462723 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:32:07.489499 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:32:07.515633 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:32:07.551448 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0929 14:32:07.580429 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0929 14:32:07.612267 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:32:07.636605 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/embed-certs-641794/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 14:32:07.662455 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:32:07.687979 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:32:07.713288 1569999 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:32:07.740551 1569999 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:32:07.760084 1569999 ssh_runner.go:195] Run: openssl version
	I0929 14:32:07.766058 1569999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:32:07.775532 1569999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:32:07.778869 1569999 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:32:07.778930 1569999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:32:07.786251 1569999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:32:07.795789 1569999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:32:07.805348 1569999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:32:07.809424 1569999 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:32:07.809515 1569999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:32:07.819159 1569999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 14:32:07.828994 1569999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:32:07.839228 1569999 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:32:07.843057 1569999 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:32:07.843122 1569999 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:32:07.850139 1569999 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:32:07.859985 1569999 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:32:07.863416 1569999 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0929 14:32:07.863469 1569999 kubeadm.go:392] StartCluster: {Name:embed-certs-641794 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:embed-certs-641794 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:32:07.863590 1569999 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:32:07.881781 1569999 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:32:07.893023 1569999 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0929 14:32:07.902538 1569999 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0929 14:32:07.902649 1569999 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0929 14:32:07.911870 1569999 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0929 14:32:07.911891 1569999 kubeadm.go:157] found existing configuration files:
	
	I0929 14:32:07.911944 1569999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0929 14:32:07.921375 1569999 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0929 14:32:07.921445 1569999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0929 14:32:07.930110 1569999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0929 14:32:07.939269 1569999 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0929 14:32:07.939356 1569999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0929 14:32:07.948043 1569999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0929 14:32:07.957355 1569999 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0929 14:32:07.957441 1569999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0929 14:32:07.965994 1569999 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0929 14:32:07.974984 1569999 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0929 14:32:07.975096 1569999 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0929 14:32:07.983746 1569999 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0929 14:32:08.059129 1569999 kubeadm.go:310] 	[WARNING SystemVerification]: cgroups v1 support is in maintenance mode, please migrate to cgroups v2
	I0929 14:32:08.059537 1569999 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1084-aws\n", err: exit status 1
	I0929 14:32:08.133272 1569999 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0929 14:32:25.136082 1569999 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0929 14:32:25.136144 1569999 kubeadm.go:310] [preflight] Running pre-flight checks
	I0929 14:32:25.136243 1569999 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0929 14:32:25.136306 1569999 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1084-aws
	I0929 14:32:25.136346 1569999 kubeadm.go:310] OS: Linux
	I0929 14:32:25.136396 1569999 kubeadm.go:310] CGROUPS_CPU: enabled
	I0929 14:32:25.136462 1569999 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0929 14:32:25.136570 1569999 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0929 14:32:25.136627 1569999 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0929 14:32:25.136686 1569999 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0929 14:32:25.136744 1569999 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0929 14:32:25.136794 1569999 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0929 14:32:25.136848 1569999 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0929 14:32:25.136900 1569999 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0929 14:32:25.136978 1569999 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0929 14:32:25.137078 1569999 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0929 14:32:25.137182 1569999 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0929 14:32:25.137250 1569999 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0929 14:32:25.140322 1569999 out.go:252]   - Generating certificates and keys ...
	I0929 14:32:25.140429 1569999 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0929 14:32:25.140500 1569999 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0929 14:32:25.140583 1569999 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0929 14:32:25.140648 1569999 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0929 14:32:25.140730 1569999 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0929 14:32:25.140797 1569999 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0929 14:32:25.140854 1569999 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0929 14:32:25.140977 1569999 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-641794 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0929 14:32:25.141032 1569999 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0929 14:32:25.141185 1569999 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-641794 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I0929 14:32:25.141254 1569999 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0929 14:32:25.141320 1569999 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0929 14:32:25.141366 1569999 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0929 14:32:25.141425 1569999 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0929 14:32:25.141478 1569999 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0929 14:32:25.141537 1569999 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0929 14:32:25.141616 1569999 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0929 14:32:25.141684 1569999 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0929 14:32:25.141741 1569999 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0929 14:32:25.141825 1569999 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0929 14:32:25.141894 1569999 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0929 14:32:25.145039 1569999 out.go:252]   - Booting up control plane ...
	I0929 14:32:25.145236 1569999 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0929 14:32:25.145367 1569999 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0929 14:32:25.145483 1569999 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0929 14:32:25.145643 1569999 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0929 14:32:25.145784 1569999 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0929 14:32:25.145904 1569999 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0929 14:32:25.146005 1569999 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0929 14:32:25.146052 1569999 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0929 14:32:25.146206 1569999 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0929 14:32:25.146321 1569999 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0929 14:32:25.146392 1569999 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.00105712s
	I0929 14:32:25.146494 1569999 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0929 14:32:25.146584 1569999 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.85.2:8443/livez
	I0929 14:32:25.146683 1569999 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0929 14:32:25.146772 1569999 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0929 14:32:25.146856 1569999 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 3.128169656s
	I0929 14:32:25.146931 1569999 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 5.155635934s
	I0929 14:32:25.147006 1569999 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 7.001783939s
	I0929 14:32:25.147130 1569999 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0929 14:32:25.147267 1569999 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0929 14:32:25.147343 1569999 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0929 14:32:25.147545 1569999 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-641794 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0929 14:32:25.147609 1569999 kubeadm.go:310] [bootstrap-token] Using token: 3xof8m.cbctonfsm5gulrnb
	I0929 14:32:25.150544 1569999 out.go:252]   - Configuring RBAC rules ...
	I0929 14:32:25.150750 1569999 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0929 14:32:25.150939 1569999 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0929 14:32:25.151085 1569999 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0929 14:32:25.151213 1569999 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0929 14:32:25.151327 1569999 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0929 14:32:25.151412 1569999 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0929 14:32:25.151525 1569999 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0929 14:32:25.151569 1569999 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0929 14:32:25.151614 1569999 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0929 14:32:25.151618 1569999 kubeadm.go:310] 
	I0929 14:32:25.151678 1569999 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0929 14:32:25.151683 1569999 kubeadm.go:310] 
	I0929 14:32:25.151759 1569999 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0929 14:32:25.151763 1569999 kubeadm.go:310] 
	I0929 14:32:25.151789 1569999 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0929 14:32:25.151847 1569999 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0929 14:32:25.151897 1569999 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0929 14:32:25.151904 1569999 kubeadm.go:310] 
	I0929 14:32:25.151959 1569999 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0929 14:32:25.151963 1569999 kubeadm.go:310] 
	I0929 14:32:25.152010 1569999 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0929 14:32:25.152014 1569999 kubeadm.go:310] 
	I0929 14:32:25.152066 1569999 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0929 14:32:25.152140 1569999 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0929 14:32:25.152208 1569999 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0929 14:32:25.152212 1569999 kubeadm.go:310] 
	I0929 14:32:25.152294 1569999 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0929 14:32:25.152371 1569999 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0929 14:32:25.152375 1569999 kubeadm.go:310] 
	I0929 14:32:25.152459 1569999 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 3xof8m.cbctonfsm5gulrnb \
	I0929 14:32:25.152707 1569999 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b \
	I0929 14:32:25.152754 1569999 kubeadm.go:310] 	--control-plane 
	I0929 14:32:25.152774 1569999 kubeadm.go:310] 
	I0929 14:32:25.152885 1569999 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0929 14:32:25.152904 1569999 kubeadm.go:310] 
	I0929 14:32:25.153001 1569999 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 3xof8m.cbctonfsm5gulrnb \
	I0929 14:32:25.153133 1569999 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:0ab4ad05387d2b551732906ec22c7c0fb9e787b40623069ae285559494ddfa4b 
	I0929 14:32:25.153158 1569999 cni.go:84] Creating CNI manager for ""
	I0929 14:32:25.153173 1569999 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:32:25.158069 1569999 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I0929 14:32:25.161056 1569999 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0929 14:32:25.174450 1569999 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
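[editor note] The two commands above create /etc/cni/net.d and copy a 496-byte bridge CNI config into it. The sketch below writes a conflist of roughly that shape; the exact JSON minikube generates is not reproduced here, and the field values (pod CIDR, plugin options) are illustrative assumptions only.

```go
// write_bridge_conflist.go
//
// Illustrative sketch only: writes a bridge CNI conflist similar in shape to
// the /etc/cni/net.d/1-k8s.conflist copied above. Field values are assumed.
package main

import (
	"encoding/json"
	"os"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16", // assumed pod CIDR, not taken from the log
				},
			},
			{
				"type":         "portmap",
				"capabilities": map[string]bool{"portMappings": true},
			},
		},
	}
	data, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		panic(err)
	}
	// Requires root, like the `sudo mkdir -p` / scp steps in the log.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", data, 0o644); err != nil {
		panic(err)
	}
}
```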
	I0929 14:32:25.198096 1569999 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0929 14:32:25.198300 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:25.198398 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-641794 minikube.k8s.io/updated_at=2025_09_29T14_32_25_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e minikube.k8s.io/name=embed-certs-641794 minikube.k8s.io/primary=true
	I0929 14:32:25.213820 1569999 ops.go:34] apiserver oom_adj: -16
	I0929 14:32:25.334886 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:25.834996 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:26.335809 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:26.835005 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:27.335215 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:27.834978 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:28.335653 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:28.835649 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:29.335059 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:29.835307 1569999 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0929 14:32:30.073768 1569999 kubeadm.go:1105] duration metric: took 4.875558878s to wait for elevateKubeSystemPrivileges
	I0929 14:32:30.073801 1569999 kubeadm.go:394] duration metric: took 22.210337705s to StartCluster
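[editor note] The repeated `kubectl get sa default` runs above are a poll loop: the cluster is not usable for the RBAC step until the default ServiceAccount exists, so the command is retried roughly every 500ms until it succeeds. A generic Go sketch of that kind of poll follows; it shells out to a local kubectl, whereas minikube runs the command over SSH inside the node, so the wrapper here is hypothetical.

```go
// poll_default_sa.go
//
// Hypothetical sketch of the poll loop visible above: run
// `kubectl get sa default` every 500ms until it succeeds or a deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("default service account not found after %s", timeout)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
```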
	I0929 14:32:30.073823 1569999 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:32:30.073901 1569999 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:32:30.075365 1569999 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:32:30.075629 1569999 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:32:30.075978 1569999 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0929 14:32:30.076191 1569999 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:32:30.076275 1569999 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-641794"
	I0929 14:32:30.076310 1569999 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-641794"
	I0929 14:32:30.076317 1569999 config.go:182] Loaded profile config "embed-certs-641794": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:32:30.076344 1569999 host.go:66] Checking if "embed-certs-641794" exists ...
	I0929 14:32:30.076360 1569999 addons.go:69] Setting default-storageclass=true in profile "embed-certs-641794"
	I0929 14:32:30.076373 1569999 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-641794"
	I0929 14:32:30.076744 1569999 cli_runner.go:164] Run: docker container inspect embed-certs-641794 --format={{.State.Status}}
	I0929 14:32:30.076880 1569999 cli_runner.go:164] Run: docker container inspect embed-certs-641794 --format={{.State.Status}}
	I0929 14:32:30.089481 1569999 out.go:179] * Verifying Kubernetes components...
	I0929 14:32:30.098418 1569999 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:32:30.101694 1569999 addons.go:238] Setting addon default-storageclass=true in "embed-certs-641794"
	I0929 14:32:30.101743 1569999 host.go:66] Checking if "embed-certs-641794" exists ...
	I0929 14:32:30.102255 1569999 cli_runner.go:164] Run: docker container inspect embed-certs-641794 --format={{.State.Status}}
	I0929 14:32:30.136184 1569999 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:32:30.139193 1569999 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:32:30.139216 1569999 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:32:30.139284 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:30.146291 1569999 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:32:30.146313 1569999 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:32:30.146373 1569999 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-641794
	I0929 14:32:30.178597 1569999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34296 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa Username:docker}
	I0929 14:32:30.190397 1569999 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34296 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/embed-certs-641794/id_rsa Username:docker}
	I0929 14:32:30.505427 1569999 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0929 14:32:30.505619 1569999 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:32:30.510040 1569999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:32:30.523107 1569999 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:32:31.519992 1569999 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.014329308s)
	I0929 14:32:31.521115 1569999 node_ready.go:35] waiting up to 6m0s for node "embed-certs-641794" to be "Ready" ...
	I0929 14:32:31.521455 1569999 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.015953129s)
	I0929 14:32:31.521483 1569999 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0929 14:32:31.522764 1569999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.012691069s)
	I0929 14:32:31.567776 1569999 node_ready.go:49] node "embed-certs-641794" is "Ready"
	I0929 14:32:31.567809 1569999 node_ready.go:38] duration metric: took 46.657009ms for node "embed-certs-641794" to be "Ready" ...
	I0929 14:32:31.567825 1569999 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:32:31.567885 1569999 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:32:31.707013 1569999 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.183813772s)
	I0929 14:32:31.707343 1569999 api_server.go:72] duration metric: took 1.631663176s to wait for apiserver process to appear ...
	I0929 14:32:31.707379 1569999 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:32:31.707425 1569999 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0929 14:32:31.710384 1569999 out.go:179] * Enabled addons: default-storageclass, storage-provisioner
	I0929 14:32:31.712859 1569999 addons.go:514] duration metric: took 1.636623725s for enable addons: enabled=[default-storageclass storage-provisioner]
	I0929 14:32:31.719233 1569999 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0929 14:32:31.721208 1569999 api_server.go:141] control plane version: v1.34.0
	I0929 14:32:31.721239 1569999 api_server.go:131] duration metric: took 13.84157ms to wait for apiserver health ...
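[editor note] The apiserver wait above is a plain HTTPS probe of /healthz that completes once the endpoint returns 200 "ok". A self-contained sketch of such a probe, using the address from this log; skipping TLS verification is a simplification for the sketch only — a real client would trust the cluster CA.

```go
// probe_healthz.go
//
// Minimal sketch of an apiserver health probe like the one logged above:
// GET https://<ip>:8443/healthz until it returns HTTP 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Sketch-only shortcut; verify against the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.85.2:8443/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy")
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("timed out waiting for apiserver /healthz")
}
```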
	I0929 14:32:31.721249 1569999 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:32:31.730927 1569999 system_pods.go:59] 8 kube-system pods found
	I0929 14:32:31.730971 1569999 system_pods.go:61] "coredns-66bc5c9577-hgmgn" [0e17d1ce-cd6d-4621-9596-574d51e3f08c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:32:31.730983 1569999 system_pods.go:61] "coredns-66bc5c9577-hmpmx" [6ffce701-7f9f-4865-af31-00b41ee46fee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:32:31.730989 1569999 system_pods.go:61] "etcd-embed-certs-641794" [774fb94d-c174-4ba0-9f60-da7bcf07a4fe] Running
	I0929 14:32:31.730996 1569999 system_pods.go:61] "kube-apiserver-embed-certs-641794" [8a95eeba-8c8c-47d6-a18d-d43f0e6f3e59] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:32:31.731007 1569999 system_pods.go:61] "kube-controller-manager-embed-certs-641794" [e957dde8-7930-44ed-af85-9b98ffa8af89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:32:31.731014 1569999 system_pods.go:61] "kube-proxy-hq49j" [c4564115-c3eb-44f0-b7ec-d59b7fe6fed2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:32:31.731018 1569999 system_pods.go:61] "kube-scheduler-embed-certs-641794" [c24bc2aa-4d47-4608-b137-3a643e6f5ad0] Running
	I0929 14:32:31.731025 1569999 system_pods.go:61] "storage-provisioner" [f230e9c0-0324-4ade-8af5-a35a02908577] Pending
	I0929 14:32:31.731031 1569999 system_pods.go:74] duration metric: took 9.777105ms to wait for pod list to return data ...
	I0929 14:32:31.731047 1569999 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:32:31.734023 1569999 default_sa.go:45] found service account: "default"
	I0929 14:32:31.734051 1569999 default_sa.go:55] duration metric: took 2.996514ms for default service account to be created ...
	I0929 14:32:31.734061 1569999 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:32:31.737695 1569999 system_pods.go:86] 8 kube-system pods found
	I0929 14:32:31.737730 1569999 system_pods.go:89] "coredns-66bc5c9577-hgmgn" [0e17d1ce-cd6d-4621-9596-574d51e3f08c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:32:31.737739 1569999 system_pods.go:89] "coredns-66bc5c9577-hmpmx" [6ffce701-7f9f-4865-af31-00b41ee46fee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:32:31.737744 1569999 system_pods.go:89] "etcd-embed-certs-641794" [774fb94d-c174-4ba0-9f60-da7bcf07a4fe] Running
	I0929 14:32:31.737756 1569999 system_pods.go:89] "kube-apiserver-embed-certs-641794" [8a95eeba-8c8c-47d6-a18d-d43f0e6f3e59] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:32:31.737771 1569999 system_pods.go:89] "kube-controller-manager-embed-certs-641794" [e957dde8-7930-44ed-af85-9b98ffa8af89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:32:31.737779 1569999 system_pods.go:89] "kube-proxy-hq49j" [c4564115-c3eb-44f0-b7ec-d59b7fe6fed2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:32:31.737786 1569999 system_pods.go:89] "kube-scheduler-embed-certs-641794" [c24bc2aa-4d47-4608-b137-3a643e6f5ad0] Running
	I0929 14:32:31.737792 1569999 system_pods.go:89] "storage-provisioner" [f230e9c0-0324-4ade-8af5-a35a02908577] Pending
	I0929 14:32:31.737818 1569999 retry.go:31] will retry after 218.659017ms: missing components: kube-dns, kube-proxy
	I0929 14:32:31.967420 1569999 system_pods.go:86] 8 kube-system pods found
	I0929 14:32:31.967527 1569999 system_pods.go:89] "coredns-66bc5c9577-hgmgn" [0e17d1ce-cd6d-4621-9596-574d51e3f08c] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:32:31.967559 1569999 system_pods.go:89] "coredns-66bc5c9577-hmpmx" [6ffce701-7f9f-4865-af31-00b41ee46fee] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:32:31.967597 1569999 system_pods.go:89] "etcd-embed-certs-641794" [774fb94d-c174-4ba0-9f60-da7bcf07a4fe] Running
	I0929 14:32:31.967658 1569999 system_pods.go:89] "kube-apiserver-embed-certs-641794" [8a95eeba-8c8c-47d6-a18d-d43f0e6f3e59] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:32:31.967685 1569999 system_pods.go:89] "kube-controller-manager-embed-certs-641794" [e957dde8-7930-44ed-af85-9b98ffa8af89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:32:31.967737 1569999 system_pods.go:89] "kube-proxy-hq49j" [c4564115-c3eb-44f0-b7ec-d59b7fe6fed2] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:32:31.967764 1569999 system_pods.go:89] "kube-scheduler-embed-certs-641794" [c24bc2aa-4d47-4608-b137-3a643e6f5ad0] Running
	I0929 14:32:31.967802 1569999 system_pods.go:89] "storage-provisioner" [f230e9c0-0324-4ade-8af5-a35a02908577] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:32:31.967847 1569999 retry.go:31] will retry after 287.666279ms: missing components: kube-dns, kube-proxy
	I0929 14:32:32.030479 1569999 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-641794" context rescaled to 1 replicas
	I0929 14:32:32.272547 1569999 system_pods.go:86] 8 kube-system pods found
	I0929 14:32:32.272640 1569999 system_pods.go:89] "coredns-66bc5c9577-hgmgn" [0e17d1ce-cd6d-4621-9596-574d51e3f08c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:32:32.272665 1569999 system_pods.go:89] "coredns-66bc5c9577-hmpmx" [6ffce701-7f9f-4865-af31-00b41ee46fee] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:32:32.272704 1569999 system_pods.go:89] "etcd-embed-certs-641794" [774fb94d-c174-4ba0-9f60-da7bcf07a4fe] Running
	I0929 14:32:32.272731 1569999 system_pods.go:89] "kube-apiserver-embed-certs-641794" [8a95eeba-8c8c-47d6-a18d-d43f0e6f3e59] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:32:32.272758 1569999 system_pods.go:89] "kube-controller-manager-embed-certs-641794" [e957dde8-7930-44ed-af85-9b98ffa8af89] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:32:32.272778 1569999 system_pods.go:89] "kube-proxy-hq49j" [c4564115-c3eb-44f0-b7ec-d59b7fe6fed2] Running
	I0929 14:32:32.272806 1569999 system_pods.go:89] "kube-scheduler-embed-certs-641794" [c24bc2aa-4d47-4608-b137-3a643e6f5ad0] Running
	I0929 14:32:32.272833 1569999 system_pods.go:89] "storage-provisioner" [f230e9c0-0324-4ade-8af5-a35a02908577] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:32:32.272859 1569999 system_pods.go:126] duration metric: took 538.787296ms to wait for k8s-apps to be running ...
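[editor note] The k8s-apps wait above lists the kube-system pods and retries with short delays until kube-dns and kube-proxy stop being reported as missing. A hedged client-go sketch of the same idea; the package paths are standard client-go, but the fixed 300ms interval is an assumption (minikube's retry helper uses varying backoffs).

```go
// wait_kube_system.go
//
// Sketch of the kube-system readiness wait above: list pods in kube-system
// and retry until every pod reports phase Running or the context times out.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err == nil {
			notRunning := 0
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					notRunning++
				}
			}
			if len(pods.Items) > 0 && notRunning == 0 {
				fmt.Println("all kube-system pods are Running")
				return
			}
		}
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for kube-system pods")
			return
		case <-time.After(300 * time.Millisecond):
		}
	}
}
```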
	I0929 14:32:32.272881 1569999 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:32:32.272962 1569999 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:32:32.293132 1569999 system_svc.go:56] duration metric: took 20.239091ms WaitForService to wait for kubelet
	I0929 14:32:32.293215 1569999 kubeadm.go:578] duration metric: took 2.217536352s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:32:32.293251 1569999 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:32:32.296281 1569999 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:32:32.296361 1569999 node_conditions.go:123] node cpu capacity is 2
	I0929 14:32:32.296389 1569999 node_conditions.go:105] duration metric: took 3.116696ms to run NodePressure ...
	I0929 14:32:32.296414 1569999 start.go:241] waiting for startup goroutines ...
	I0929 14:32:32.296450 1569999 start.go:246] waiting for cluster config update ...
	I0929 14:32:32.296475 1569999 start.go:255] writing updated cluster config ...
	I0929 14:32:32.296793 1569999 ssh_runner.go:195] Run: rm -f paused
	I0929 14:32:32.305751 1569999 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:32:32.360769 1569999 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hgmgn" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:32:34.366979 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hgmgn" is not "Ready", error: <nil>
	W0929 14:32:36.367280 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hgmgn" is not "Ready", error: <nil>
	W0929 14:32:38.367761 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hgmgn" is not "Ready", error: <nil>
	W0929 14:32:40.866304 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hgmgn" is not "Ready", error: <nil>
	I0929 14:32:42.363607 1569999 pod_ready.go:99] pod "coredns-66bc5c9577-hgmgn" in "kube-system" namespace is gone: getting pod "coredns-66bc5c9577-hgmgn" in "kube-system" namespace (will retry): pods "coredns-66bc5c9577-hgmgn" not found
	I0929 14:32:42.363637 1569999 pod_ready.go:86] duration metric: took 10.002823585s for pod "coredns-66bc5c9577-hgmgn" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:32:42.363650 1569999 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-hmpmx" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:32:44.369426 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hmpmx" is not "Ready", error: <nil>
	W0929 14:32:46.371265 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hmpmx" is not "Ready", error: <nil>
	W0929 14:32:48.869300 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hmpmx" is not "Ready", error: <nil>
	W0929 14:32:51.369517 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hmpmx" is not "Ready", error: <nil>
	W0929 14:32:53.869741 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hmpmx" is not "Ready", error: <nil>
	W0929 14:32:56.369753 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hmpmx" is not "Ready", error: <nil>
	W0929 14:32:58.869865 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hmpmx" is not "Ready", error: <nil>
	W0929 14:33:01.369741 1569999 pod_ready.go:104] pod "coredns-66bc5c9577-hmpmx" is not "Ready", error: <nil>
	I0929 14:33:03.369659 1569999 pod_ready.go:94] pod "coredns-66bc5c9577-hmpmx" is "Ready"
	I0929 14:33:03.369688 1569999 pod_ready.go:86] duration metric: took 21.006031129s for pod "coredns-66bc5c9577-hmpmx" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:03.372803 1569999 pod_ready.go:83] waiting for pod "etcd-embed-certs-641794" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:03.378278 1569999 pod_ready.go:94] pod "etcd-embed-certs-641794" is "Ready"
	I0929 14:33:03.378307 1569999 pod_ready.go:86] duration metric: took 5.47898ms for pod "etcd-embed-certs-641794" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:03.380887 1569999 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-641794" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:03.385866 1569999 pod_ready.go:94] pod "kube-apiserver-embed-certs-641794" is "Ready"
	I0929 14:33:03.385895 1569999 pod_ready.go:86] duration metric: took 4.982629ms for pod "kube-apiserver-embed-certs-641794" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:03.388627 1569999 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-641794" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:03.567231 1569999 pod_ready.go:94] pod "kube-controller-manager-embed-certs-641794" is "Ready"
	I0929 14:33:03.567264 1569999 pod_ready.go:86] duration metric: took 178.609773ms for pod "kube-controller-manager-embed-certs-641794" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:03.767816 1569999 pod_ready.go:83] waiting for pod "kube-proxy-hq49j" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:04.167094 1569999 pod_ready.go:94] pod "kube-proxy-hq49j" is "Ready"
	I0929 14:33:04.167124 1569999 pod_ready.go:86] duration metric: took 399.281745ms for pod "kube-proxy-hq49j" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:04.367395 1569999 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-641794" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:04.767673 1569999 pod_ready.go:94] pod "kube-scheduler-embed-certs-641794" is "Ready"
	I0929 14:33:04.767703 1569999 pod_ready.go:86] duration metric: took 400.282166ms for pod "kube-scheduler-embed-certs-641794" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:33:04.767716 1569999 pod_ready.go:40] duration metric: took 32.46188992s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:33:04.841892 1569999 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 14:33:04.845233 1569999 out.go:179] * Done! kubectl is now configured to use "embed-certs-641794" cluster and "default" namespace by default
	
	
	==> Docker <==
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.589762196Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.592813454Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:20:01 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:01.592855851Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:20:18 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:18.220808175Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:20:18 no-preload-983174 dockerd[893]: time="2025-09-29T14:20:18.303590758Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:25:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:14.180626656Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:25:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:14.180666755Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:25:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:14.183657895Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:25:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:14.183697559Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:25:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:14.399941464Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:25:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:14.592665620Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:25:14 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:14.592769605Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:25:14 no-preload-983174 cri-dockerd[1211]: time="2025-09-29T14:25:14Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:25:25 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:25.214070641Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:25:25 no-preload-983174 dockerd[893]: time="2025-09-29T14:25:25.303885031Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:30:17 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:17.389990575Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:30:17 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:17.574081922Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:30:17 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:17.574179917Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:30:17 no-preload-983174 cri-dockerd[1211]: time="2025-09-29T14:30:17Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:30:17 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:17.585602136Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:30:17 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:17.585640216Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:30:17 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:17.588385568Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:30:17 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:17.588425954Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:30:28 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:28.204703591Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:30:28 no-preload-983174 dockerd[893]: time="2025-09-29T14:30:28.301938874Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
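[editor note] The daemon log above shows two distinct pull failures on this node: the metrics-server image host `fake.domain` never resolves, and pulls of docker.io/kubernetesui/dashboard are rejected with `toomanyrequests` (the unauthenticated Docker Hub rate limit), which matches the dashboard pods listed later in the node's pod table. The hypothetical helper below illustrates a pull retry with exponential backoff for transient registry errors; note that backoff alone cannot fix an unauthenticated rate limit — authenticating or pulling through a mirror would be the real remedy.

```go
// retry_pull.go
//
// Hypothetical helper: retry `docker pull` with exponential backoff for
// transient registry errors such as the toomanyrequests responses above.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func pullWithBackoff(image string, attempts int) error {
	delay := 2 * time.Second
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("docker", "pull", image).CombinedOutput()
		if err == nil {
			return nil
		}
		fmt.Printf("pull %s failed (attempt %d): %v\n%s", image, i+1, err, out)
		time.Sleep(delay)
		delay *= 2
	}
	return fmt.Errorf("giving up on %s after %d attempts", image, attempts)
}

func main() {
	// Digest taken from the daemon log above.
	img := "docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	if err := pullWithBackoff(img, 3); err != nil {
		fmt.Println(err)
	}
}
```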
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0f3eaee26dfbe       66749159455b3                                                                                         18 minutes ago      Running             storage-provisioner       3                   e1a72773bfa10       storage-provisioner
	12f24daebca75       138784d87c9c5                                                                                         18 minutes ago      Running             coredns                   1                   44bf744032360       coredns-66bc5c9577-846n7
	77c7f00743aa5       1611cd07b61d5                                                                                         18 minutes ago      Running             busybox                   1                   ac1d7c0d3591d       busybox
	d909070e1391e       6fc32d66c1411                                                                                         18 minutes ago      Running             kube-proxy                1                   af0122fda25d3       kube-proxy-rjpsv
	19afabc4b49f0       66749159455b3                                                                                         18 minutes ago      Exited              storage-provisioner       2                   e1a72773bfa10       storage-provisioner
	498c1ebdc119d       d291939e99406                                                                                         18 minutes ago      Running             kube-apiserver            1                   4000c3e6ecb98       kube-apiserver-no-preload-983174
	c5c159be5364e       996be7e86d9b3                                                                                         18 minutes ago      Running             kube-controller-manager   1                   2919f7749a9e1       kube-controller-manager-no-preload-983174
	a935dc35fdae2       a25f5ef9c34c3                                                                                         18 minutes ago      Running             kube-scheduler            1                   78e3a54cf3adf       kube-scheduler-no-preload-983174
	2c81a2420c7b3       a1894772a478e                                                                                         18 minutes ago      Running             etcd                      1                   c571121b4eb3c       etcd-no-preload-983174
	ca1a7d70e46d1       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Exited              busybox                   0                   0a101000dc92b       busybox
	c2d36972d1b2b       138784d87c9c5                                                                                         19 minutes ago      Exited              coredns                   0                   d63538fd5fb45       coredns-66bc5c9577-846n7
	b1715dc9052f2       6fc32d66c1411                                                                                         19 minutes ago      Exited              kube-proxy                0                   1b91062fbe529       kube-proxy-rjpsv
	5754075776ddd       996be7e86d9b3                                                                                         20 minutes ago      Exited              kube-controller-manager   0                   84989d2afdf58       kube-controller-manager-no-preload-983174
	adf045c7d8305       a25f5ef9c34c3                                                                                         20 minutes ago      Exited              kube-scheduler            0                   d54cd597560fd       kube-scheduler-no-preload-983174
	5d5403194c3fc       d291939e99406                                                                                         20 minutes ago      Exited              kube-apiserver            0                   69eaf13796713       kube-apiserver-no-preload-983174
	be788d93ba9f2       a1894772a478e                                                                                         20 minutes ago      Exited              etcd                      0                   bf6f76dda718c       etcd-no-preload-983174
	
	
	==> coredns [12f24daebca7] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:35857 - 7983 "HINFO IN 4084271001323329853.8401138301617600447. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.025371055s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [c2d36972d1b2] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	[INFO] Reloading complete
	[INFO] 127.0.0.1:33520 - 54257 "HINFO IN 4588669308009460363.8948613153329900029. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.033441071s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
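[editor note] Both CoreDNS instances above fail to list Services, Namespaces, and EndpointSlices because dials to the in-cluster apiserver VIP 10.96.0.1:443 time out. A small connectivity probe like the sketch below, run from inside such a pod, would distinguish a Service-VIP routing problem (kube-proxy/iptables) from the apiserver itself being down. The VIP is taken from the log; the node-address probe assumes the apiserver listens on port 8443 at the node's InternalIP, which is not stated in this section.

```go
// probe_service_vip.go
//
// Sketch of a connectivity probe for the dial errors above: attempt TCP
// connections to the kubernetes Service VIP and to the node address.
package main

import (
	"fmt"
	"net"
	"time"
)

func probe(addr string) {
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Printf("%s: unreachable: %v\n", addr, err)
		return
	}
	conn.Close()
	fmt.Printf("%s: reachable\n", addr)
}

func main() {
	probe("10.96.0.1:443")     // in-cluster Service VIP that CoreDNS cannot reach
	probe("192.168.76.2:8443") // node InternalIP from the log; port 8443 is an assumption
}
```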
	
	
	==> describe nodes <==
	Name:               no-preload-983174
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=no-preload-983174
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=no-preload-983174
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T14_13_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 14:13:04 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-983174
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 14:32:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 14:30:32 +0000   Mon, 29 Sep 2025 14:13:00 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 14:30:32 +0000   Mon, 29 Sep 2025 14:13:00 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 14:30:32 +0000   Mon, 29 Sep 2025 14:13:00 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 14:30:32 +0000   Mon, 29 Sep 2025 14:13:04 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    no-preload-983174
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 b8a151a63bd744a6813e9ba30655565b
	  System UUID:                4c406f57-abce-4a9d-b98b-1bca4b1d2f5e
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-846n7                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-no-preload-983174                        100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kube-apiserver-no-preload-983174              250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-no-preload-983174     200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-rjpsv                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-no-preload-983174              100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-6pt8w               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-srp8w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-kpkl2         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node no-preload-983174 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node no-preload-983174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node no-preload-983174 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node no-preload-983174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node no-preload-983174 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     19m                kubelet          Node no-preload-983174 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           19m                node-controller  Node no-preload-983174 event: Registered Node no-preload-983174 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-983174 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-983174 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-983174 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node no-preload-983174 event: Registered Node no-preload-983174 in Controller
	
	
	==> dmesg <==
	[Sep29 13:01] kauditd_printk_skb: 8 callbacks suppressed
	
	
	==> etcd [2c81a2420c7b] <==
	{"level":"warn","ts":"2025-09-29T14:14:22.728400Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.768113Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.789405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.810549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.837667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38012","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.874652Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.892958Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38036","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.915415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.925929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.952069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.981654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:22.996297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38114","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.027497Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.045056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.078307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38168","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.107435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.186436Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38196","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.202770Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38220","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:14:23.279520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38234","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:24:21.420903Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1073}
	{"level":"info","ts":"2025-09-29T14:24:21.567877Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1073,"took":"146.645453ms","hash":2394303440,"current-db-size-bytes":3198976,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1380352,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-29T14:24:21.567951Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2394303440,"revision":1073,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T14:29:21.428059Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1327}
	{"level":"info","ts":"2025-09-29T14:29:21.431944Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1327,"took":"3.459535ms","hash":945387592,"current-db-size-bytes":3198976,"current-db-size":"3.2 MB","current-db-size-in-use-bytes":1822720,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T14:29:21.431993Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":945387592,"revision":1327,"compact-revision":1073}
	
	
	==> etcd [be788d93ba9f] <==
	{"level":"warn","ts":"2025-09-29T14:13:03.408426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.421712Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40078","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.445762Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40106","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.467629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40130","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.485624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40158","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.502805Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40182","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:13:03.614717Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40198","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:13:58.642170Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T14:13:58.642236Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"no-preload-983174","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-09-29T14:13:58.642343Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:13:59.721755Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:13:59.721836Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:13:59.721858Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-09-29T14:13:59.721959Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T14:13:59.721972Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T14:13:59.722210Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:13:59.722240Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:13:59.722248Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T14:13:59.722286Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:13:59.722294Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:13:59.722300Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:13:59.725574Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-09-29T14:13:59.725644Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:13:59.725672Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-29T14:13:59.725678Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"no-preload-983174","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 14:33:07 up  6:15,  0 users,  load average: 1.27, 1.02, 2.02
	Linux no-preload-983174 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [498c1ebdc119] <==
	I0929 14:29:25.281281       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:29:46.048334       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:30:03.304478       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:30:25.280223       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:30:25.280308       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:30:25.280329       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:30:25.282400       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:30:25.282515       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:30:25.282568       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:30:48.900589       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:31:26.164871       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:32:13.896612       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:32:25.280992       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:32:25.281158       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:32:25.281283       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:32:25.283627       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:32:25.283891       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:32:25.284024       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:32:27.995987       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [5d5403194c3f] <==
	W0929 14:13:58.651831       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.651883       1 logging.go:55] [core] [Channel #91 SubChannel #93]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.651925       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.651966       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652008       1 logging.go:55] [core] [Channel #179 SubChannel #181]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652055       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652098       1 logging.go:55] [core] [Channel #243 SubChannel #245]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652145       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652191       1 logging.go:55] [core] [Channel #151 SubChannel #153]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652228       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652270       1 logging.go:55] [core] [Channel #21 SubChannel #23]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652308       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652347       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652385       1 logging.go:55] [core] [Channel #251 SubChannel #253]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652425       1 logging.go:55] [core] [Channel #35 SubChannel #37]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652461       1 logging.go:55] [core] [Channel #79 SubChannel #81]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.652498       1 logging.go:55] [core] [Channel #223 SubChannel #225]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653536       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653615       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653668       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653719       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653769       1 logging.go:55] [core] [Channel #199 SubChannel #201]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653819       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:13:58.653865       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0929 14:13:59.602355       1 cidrallocator.go:210] stopping ServiceCIDR Allocator Controller
	
	
	==> kube-controller-manager [5754075776dd] <==
	I0929 14:13:11.330682       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 14:13:11.340594       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0929 14:13:11.344047       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 14:13:11.344357       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0929 14:13:11.344370       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0929 14:13:11.344393       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 14:13:11.344403       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0929 14:13:11.345010       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 14:13:11.345077       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 14:13:11.345484       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 14:13:11.345525       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 14:13:11.345536       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 14:13:11.345547       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 14:13:11.345554       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 14:13:11.345561       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 14:13:11.345571       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 14:13:11.350337       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0929 14:13:11.345605       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 14:13:11.362515       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 14:13:11.362972       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-983174" podCIDRs=["10.244.0.0/24"]
	I0929 14:13:11.377760       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0929 14:13:11.399898       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:13:11.439972       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:13:11.439996       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 14:13:11.440004       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [c5c159be5364] <==
	I0929 14:26:59.930311       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:27:29.810632       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:27:29.938476       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:27:59.815569       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:27:59.946485       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:28:29.829578       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:28:29.955287       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:28:59.834504       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:28:59.963154       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:29:29.838541       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:29:29.971063       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:29:59.843353       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:29:59.978365       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:30:29.852918       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:30:29.986010       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:30:59.857026       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:30:59.995119       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:31:29.861965       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:31:30.030550       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:31:59.867069       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:32:00.056801       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:32:29.904047       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:32:30.066769       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:32:59.908408       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:33:00.141636       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [b1715dc9052f] <==
	I0929 14:13:13.789527       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:13:13.894173       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:13:13.995056       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:13:13.995115       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 14:13:13.995222       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:13:14.031622       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:13:14.031764       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:13:14.045417       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:13:14.048968       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:13:14.049149       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:13:14.051260       1 config.go:200] "Starting service config controller"
	I0929 14:13:14.051471       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:13:14.051498       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:13:14.051502       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:13:14.051514       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:13:14.051522       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:13:14.056589       1 config.go:309] "Starting node config controller"
	I0929 14:13:14.056610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:13:14.056618       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:13:14.152324       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:13:14.152326       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 14:13:14.152369       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [d909070e1391] <==
	I0929 14:14:26.511941       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:14:26.582542       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:14:26.688725       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:14:26.688767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 14:14:26.688844       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:14:26.729752       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:14:26.729809       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:14:26.734039       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:14:26.734597       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:14:26.734622       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:14:26.736339       1 config.go:200] "Starting service config controller"
	I0929 14:14:26.736363       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:14:26.736390       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:14:26.736394       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:14:26.736571       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:14:26.736589       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:14:26.741317       1 config.go:309] "Starting node config controller"
	I0929 14:14:26.741342       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:14:26.741350       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:14:26.836902       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 14:14:26.836909       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:14:26.836951       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [a935dc35fdae] <==
	I0929 14:14:22.833590       1 serving.go:386] Generated self-signed cert in-memory
	W0929 14:14:24.232403       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 14:14:24.232441       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 14:14:24.232452       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 14:14:24.232460       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 14:14:24.316179       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 14:14:24.316210       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:14:24.319596       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 14:14:24.319716       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:14:24.319734       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:14:24.319750       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 14:14:24.421057       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [adf045c7d830] <==
	E0929 14:13:04.415937       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:13:04.415973       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 14:13:04.416017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 14:13:04.416064       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 14:13:04.416214       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 14:13:04.416382       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:13:04.416455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:13:05.235830       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:13:05.286231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 14:13:05.302609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 14:13:05.391347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 14:13:05.418259       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:13:05.433615       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 14:13:05.440133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 14:13:05.449255       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 14:13:05.452759       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 14:13:05.599585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 14:13:05.737315       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	I0929 14:13:08.648329       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:13:58.530875       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 14:13:58.536043       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:13:58.536178       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 14:13:58.536194       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 14:13:58.536229       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 14:13:58.540922       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 14:31:24 no-preload-983174 kubelet[1389]: E0929 14:31:24.157391    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:31:26 no-preload-983174 kubelet[1389]: E0929 14:31:26.157618    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:31:31 no-preload-983174 kubelet[1389]: E0929 14:31:31.157410    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:31:35 no-preload-983174 kubelet[1389]: E0929 14:31:35.156524    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:31:37 no-preload-983174 kubelet[1389]: E0929 14:31:37.157484    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:31:43 no-preload-983174 kubelet[1389]: E0929 14:31:43.156915    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:31:47 no-preload-983174 kubelet[1389]: E0929 14:31:47.158431    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:31:51 no-preload-983174 kubelet[1389]: E0929 14:31:51.156731    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:31:58 no-preload-983174 kubelet[1389]: E0929 14:31:58.164694    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:31:59 no-preload-983174 kubelet[1389]: E0929 14:31:59.157722    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:32:04 no-preload-983174 kubelet[1389]: E0929 14:32:04.165260    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:32:11 no-preload-983174 kubelet[1389]: E0929 14:32:11.156654    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:32:14 no-preload-983174 kubelet[1389]: E0929 14:32:14.162330    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:32:15 no-preload-983174 kubelet[1389]: E0929 14:32:15.157545    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:32:25 no-preload-983174 kubelet[1389]: E0929 14:32:25.156588    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:32:26 no-preload-983174 kubelet[1389]: E0929 14:32:26.162983    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:32:28 no-preload-983174 kubelet[1389]: E0929 14:32:28.160715    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:32:39 no-preload-983174 kubelet[1389]: E0929 14:32:39.157088    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:32:40 no-preload-983174 kubelet[1389]: E0929 14:32:40.158534    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:32:41 no-preload-983174 kubelet[1389]: E0929 14:32:41.157102    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:32:54 no-preload-983174 kubelet[1389]: E0929 14:32:54.159909    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-kpkl2" podUID="80983d01-da8e-4456-bdd9-c6b9c062762d"
	Sep 29 14:32:54 no-preload-983174 kubelet[1389]: E0929 14:32:54.160978    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	Sep 29 14:32:55 no-preload-983174 kubelet[1389]: E0929 14:32:55.157686    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:33:07 no-preload-983174 kubelet[1389]: E0929 14:33:07.157768    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-srp8w" podUID="f8f62c7d-f38d-47f2-bbe6-65e0d812ad2c"
	Sep 29 14:33:07 no-preload-983174 kubelet[1389]: E0929 14:33:07.158113    1389 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-6pt8w" podUID="db3c374a-7d3e-4ebd-9a71-c1245d62d2ec"
	
	
	==> storage-provisioner [0f3eaee26dfb] <==
	W0929 14:32:42.041738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:44.045702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:44.052735       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:46.056579       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:46.061531       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:48.065582       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:48.070684       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:50.074371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:50.079509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:52.083205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:52.090881       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:54.094199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:54.099369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:56.103081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:56.110423       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:58.113826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:32:58.118705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:33:00.131186       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:33:00.169690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:33:02.172575       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:33:02.177756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:33:04.181107       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:33:04.185659       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:33:06.189901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:33:06.196183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [19afabc4b49f] <==
	I0929 14:14:26.359914       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 14:14:27.368787       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-983174 -n no-preload-983174
E0929 14:33:07.655121 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-983174 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-6pt8w dashboard-metrics-scraper-6ffb444bf9-srp8w kubernetes-dashboard-855c9754f9-kpkl2
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-983174 describe pod metrics-server-746fcd58dc-6pt8w dashboard-metrics-scraper-6ffb444bf9-srp8w kubernetes-dashboard-855c9754f9-kpkl2
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-983174 describe pod metrics-server-746fcd58dc-6pt8w dashboard-metrics-scraper-6ffb444bf9-srp8w kubernetes-dashboard-855c9754f9-kpkl2: exit status 1 (139.286319ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-6pt8w" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-srp8w" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-kpkl2" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-983174 describe pod metrics-server-746fcd58dc-6pt8w dashboard-metrics-scraper-6ffb444bf9-srp8w kubernetes-dashboard-855c9754f9-kpkl2: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (543.34s)
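The post-mortem above locates the failed pods with kubectl's status.phase!=Running field selector (helpers_test.go:269). As a minimal sketch only, not the harness's own code, the same query can be expressed with client-go; the kubeconfig context name "no-preload-983174" is taken from the log and is otherwise an assumption about the local setup:

// Sketch: list non-Running pods across all namespaces via client-go,
// mirroring `kubectl get po --field-selector=status.phase!=Running -A`.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: kubeconfig context "no-preload-983174" exists locally, as in the test log.
	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{CurrentContext: "no-preload-983174"},
	).ClientConfig()
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}

In this run such a listing would have returned the metrics-server and dashboard pods stuck in ImagePullBackOff, matching the kubelet errors above.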

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-47mqf" [da179c3b-5a5b-452e-9da4-57b22177fba3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-641794 -n embed-certs-641794
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 14:43:28.007489848 +0000 UTC m=+6101.272703267
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-641794 describe po kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context embed-certs-641794 describe po kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-47mqf
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-641794/192.168.85.2
Start Time:       Mon, 29 Sep 2025 14:33:53 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkbff (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-pkbff:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  9m35s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf to embed-certs-641794
Normal   Pulling    6m44s (x5 over 9m35s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     6m44s (x5 over 9m35s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     6m44s (x5 over 9m35s)   kubelet            Error: ErrImagePull
Warning  Failed     4m31s (x20 over 9m34s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m16s (x21 over 9m34s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context embed-certs-641794 logs kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context embed-certs-641794 logs kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard: exit status 1 (114.908473ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-47mqf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context embed-certs-641794 logs kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
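The failure mode here is the 9m0s wait for a pod labelled k8s-app=kubernetes-dashboard expiring while the image pull is rate-limited. A rough client-go sketch of that kind of bounded wait (not the harness's actual implementation; it assumes the default kubeconfig file and a 10-second poll interval) is:

// Sketch: poll up to 9 minutes for a Ready pod with label k8s-app=kubernetes-dashboard.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func dashboardReady(ctx context.Context, cs *kubernetes.Clientset) (bool, error) {
	pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		return false, nil // tolerate transient API errors and keep polling
	}
	for _, p := range pods.Items {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	// Assumption: the current kubeconfig points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	err = wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) { return dashboardReady(ctx, cs) })
	if err != nil {
		fmt.Println("dashboard pod never became Ready:", err) // what this run hit: context deadline exceeded
		return
	}
	fmt.Println("dashboard pod is Ready")
}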
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-641794
helpers_test.go:243: (dbg) docker inspect embed-certs-641794:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24",
	        "Created": "2025-09-29T14:31:58.13596895Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1580073,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T14:33:28.565493532Z",
	            "FinishedAt": "2025-09-29T14:33:27.626783812Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24/hostname",
	        "HostsPath": "/var/lib/docker/containers/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24/hosts",
	        "LogPath": "/var/lib/docker/containers/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24-json.log",
	        "Name": "/embed-certs-641794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-641794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-641794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24",
	                "LowerDir": "/var/lib/docker/overlay2/f7521dcd4374cc4c43cd92a8c207215d5eafc426d44f484d6c35dedf86164c6b-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f7521dcd4374cc4c43cd92a8c207215d5eafc426d44f484d6c35dedf86164c6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f7521dcd4374cc4c43cd92a8c207215d5eafc426d44f484d6c35dedf86164c6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f7521dcd4374cc4c43cd92a8c207215d5eafc426d44f484d6c35dedf86164c6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-641794",
	                "Source": "/var/lib/docker/volumes/embed-certs-641794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-641794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-641794",
	                "name.minikube.sigs.k8s.io": "embed-certs-641794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ab2ba6a1681f3a6c8cd864ec56c876c496c43306607503628dde6d15c66dd7c",
	            "SandboxKey": "/var/run/docker/netns/9ab2ba6a1681",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34306"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34307"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34310"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34308"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34309"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-641794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:8e:0b:08:c6:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74b92272e8b5ea380f2a2d8d88cf9058f170799fc14d15f976032de06e56e31f",
	                    "EndpointID": "3c69a303efe3b9fceec361df024343f7061d3e5f84cf3f88621ba1b0c92ed18c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-641794",
	                        "c24f0a72b686"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
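The docker inspect dump above is what the post-mortem uses for container state and published ports. For reference, a hedged sketch that reads the same fields through the Docker Engine Go SDK (assuming the SDK is importable and using the container name from the log) might be:

// Sketch: read container status and port bindings, like `docker inspect embed-certs-641794`.
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// "embed-certs-641794" is the minikube profile/container name shown in the inspect output above.
	info, err := cli.ContainerInspect(context.Background(), "embed-certs-641794")
	if err != nil {
		panic(err)
	}
	fmt.Println("status:", info.State.Status) // "running" in this report
	for port, bindings := range info.NetworkSettings.Ports {
		for _, b := range bindings {
			fmt.Printf("%s -> %s:%s\n", port, b.HostIP, b.HostPort)
		}
	}
}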
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-641794 -n embed-certs-641794
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-641794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-641794 logs -n 25: (1.422595664s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ image   │ no-preload-983174 image list --format=json                                                                                                                                                                                                      │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ pause   │ -p no-preload-983174 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ unpause │ -p no-preload-983174 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p no-preload-983174                                                                                                                                                                                                                            │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p no-preload-983174                                                                                                                                                                                                                            │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p disable-driver-mounts-627946                                                                                                                                                                                                                 │ disable-driver-mounts-627946 │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ start   │ -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-641794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ stop    │ -p embed-certs-641794 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-641794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ start   │ -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-093064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ stop    │ -p newest-cni-093064 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-093064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ start   │ -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ image   │ newest-cni-093064 image list --format=json                                                                                                                                                                                                      │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ pause   │ -p newest-cni-093064 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ unpause │ -p newest-cni-093064 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ delete  │ -p newest-cni-093064                                                                                                                                                                                                                            │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ delete  │ -p newest-cni-093064                                                                                                                                                                                                                            │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ start   │ -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-186820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ stop    │ -p default-k8s-diff-port-186820 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-186820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ start   │ -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 14:35:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 14:35:42.456122 1596062 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:35:42.456362 1596062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:35:42.456395 1596062 out.go:374] Setting ErrFile to fd 2...
	I0929 14:35:42.456415 1596062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:35:42.456738 1596062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:35:42.457163 1596062 out.go:368] Setting JSON to false
	I0929 14:35:42.458288 1596062 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22695,"bootTime":1759133848,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:35:42.458402 1596062 start.go:140] virtualization:  
	I0929 14:35:42.462007 1596062 out.go:179] * [default-k8s-diff-port-186820] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:35:42.465793 1596062 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:35:42.465926 1596062 notify.go:220] Checking for updates...
	I0929 14:35:42.471729 1596062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:35:42.474683 1596062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:42.477543 1596062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:35:42.480431 1596062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:35:42.483237 1596062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:35:42.486711 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:42.487301 1596062 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:35:42.514877 1596062 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:35:42.515008 1596062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:35:42.572860 1596062 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:35:42.562452461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:35:42.572973 1596062 docker.go:318] overlay module found
	I0929 14:35:42.576085 1596062 out.go:179] * Using the docker driver based on existing profile
	I0929 14:35:42.578939 1596062 start.go:304] selected driver: docker
	I0929 14:35:42.578961 1596062 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:42.579120 1596062 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:35:42.579853 1596062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:35:42.635895 1596062 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:35:42.626575461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:35:42.636238 1596062 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:35:42.636278 1596062 cni.go:84] Creating CNI manager for ""
	I0929 14:35:42.636347 1596062 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:35:42.636386 1596062 start.go:348] cluster config:
	{Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:42.641605 1596062 out.go:179] * Starting "default-k8s-diff-port-186820" primary control-plane node in "default-k8s-diff-port-186820" cluster
	I0929 14:35:42.645130 1596062 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:35:42.648466 1596062 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:35:42.651441 1596062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:35:42.651462 1596062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:35:42.651506 1596062 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 14:35:42.651523 1596062 cache.go:58] Caching tarball of preloaded images
	I0929 14:35:42.651603 1596062 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 14:35:42.651613 1596062 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 14:35:42.651737 1596062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/config.json ...
	I0929 14:35:42.671234 1596062 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:35:42.671260 1596062 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:35:42.671281 1596062 cache.go:232] Successfully downloaded all kic artifacts
	I0929 14:35:42.671312 1596062 start.go:360] acquireMachinesLock for default-k8s-diff-port-186820: {Name:mk14ee05a72e1bc87d0193bcc4d30163df297691 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:35:42.671385 1596062 start.go:364] duration metric: took 48.354µs to acquireMachinesLock for "default-k8s-diff-port-186820"
	I0929 14:35:42.671408 1596062 start.go:96] Skipping create...Using existing machine configuration
	I0929 14:35:42.671416 1596062 fix.go:54] fixHost starting: 
	I0929 14:35:42.671679 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:42.688259 1596062 fix.go:112] recreateIfNeeded on default-k8s-diff-port-186820: state=Stopped err=<nil>
	W0929 14:35:42.688293 1596062 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 14:35:42.691565 1596062 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-186820" ...
	I0929 14:35:42.691663 1596062 cli_runner.go:164] Run: docker start default-k8s-diff-port-186820
	I0929 14:35:42.980213 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:43.009181 1596062 kic.go:430] container "default-k8s-diff-port-186820" state is running.
	I0929 14:35:43.009618 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:43.038170 1596062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/config.json ...
	I0929 14:35:43.038413 1596062 machine.go:93] provisionDockerMachine start ...
	I0929 14:35:43.038482 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:43.061723 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:43.062111 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:43.062127 1596062 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:35:43.062747 1596062 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41000->127.0.0.1:34321: read: connection reset by peer
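The handshake failure above is expected right after `docker start`: sshd inside the restarted container is not accepting connections yet, so libmachine simply retries the dial until it succeeds a few seconds later (the next log line shows the retry landing). A minimal sketch of that retry pattern, assuming golang.org/x/crypto/ssh and the key path and port shown in this log; the helper name and timings are illustrative, not minikube's actual code:

    package main

    import (
    	"log"
    	"os"
    	"time"

    	"golang.org/x/crypto/ssh"
    )

    // dialWithRetry keeps re-dialing until sshd in the freshly started
    // container accepts the connection or the deadline passes.
    func dialWithRetry(addr string, cfg *ssh.ClientConfig, wait time.Duration) (*ssh.Client, error) {
    	var lastErr error
    	for end := time.Now().Add(wait); time.Now().Before(end); {
    		c, err := ssh.Dial("tcp", addr, cfg)
    		if err == nil {
    			return c, nil
    		}
    		lastErr = err // e.g. "connection reset by peer" while the container is still booting
    		time.Sleep(2 * time.Second)
    	}
    	return nil, lastErr
    }

    func main() {
    	key, err := os.ReadFile("/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa")
    	if err != nil {
    		log.Fatal(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := dialWithRetry("127.0.0.1:34321", &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
    		Timeout:         10 * time.Second,
    	}, 60*time.Second)
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer client.Close()

    	sess, err := client.NewSession()
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer sess.Close()
    	out, err := sess.Output("hostname")
    	if err != nil {
    		log.Fatal(err)
    	}
    	log.Printf("hostname: %s", out)
    }
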
	I0929 14:35:46.204046 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-186820
	
	I0929 14:35:46.204073 1596062 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-186820"
	I0929 14:35:46.204141 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.222056 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.222389 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.222406 1596062 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-186820 && echo "default-k8s-diff-port-186820" | sudo tee /etc/hostname
	I0929 14:35:46.377247 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-186820
	
	I0929 14:35:46.377348 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.397114 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.397485 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.397509 1596062 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-186820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-186820/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-186820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:35:46.537135 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:35:46.537160 1596062 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:35:46.537236 1596062 ubuntu.go:190] setting up certificates
	I0929 14:35:46.537245 1596062 provision.go:84] configureAuth start
	I0929 14:35:46.537316 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:46.558841 1596062 provision.go:143] copyHostCerts
	I0929 14:35:46.558910 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:35:46.558934 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:35:46.559026 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:35:46.559142 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:35:46.559154 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:35:46.559183 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:35:46.559251 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:35:46.559260 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:35:46.559289 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:35:46.559350 1596062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-186820 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-186820 localhost minikube]
	I0929 14:35:46.733893 1596062 provision.go:177] copyRemoteCerts
	I0929 14:35:46.733959 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:35:46.733998 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.755356 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:46.858489 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:35:46.883909 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 14:35:46.910465 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 14:35:46.942412 1596062 provision.go:87] duration metric: took 405.141346ms to configureAuth
	I0929 14:35:46.942438 1596062 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:35:46.942640 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:46.942699 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.959513 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.959825 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.959842 1596062 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:35:47.108999 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:35:47.109020 1596062 ubuntu.go:71] root file system type: overlay
	I0929 14:35:47.109131 1596062 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:35:47.109201 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.126915 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:47.127240 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:47.127365 1596062 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:35:47.281272 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:35:47.281364 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.299262 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:47.299576 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:47.299606 1596062 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:35:47.450591 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
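The `diff -u ... || { mv ...; systemctl ... }` one-liner above only swaps in docker.service.new and restarts Docker when the rendered unit actually differs from what is already on disk, so repeated `minikube start` runs do not restart the daemon needlessly. A rough Go equivalent of that compare-then-replace step (paths as in the log; this would run as root on the node, not on the host):

    package main

    import (
    	"bytes"
    	"log"
    	"os"
    	"os/exec"
    )

    func main() {
    	const unit = "/lib/systemd/system/docker.service"
    	newUnit, err := os.ReadFile(unit + ".new")
    	if err != nil {
    		log.Fatal(err)
    	}
    	current, _ := os.ReadFile(unit) // may not exist yet; treat as empty

    	if bytes.Equal(current, newUnit) {
    		return // unit unchanged, leave the running daemon alone
    	}
    	if err := os.Rename(unit+".new", unit); err != nil {
    		log.Fatal(err)
    	}
    	for _, args := range [][]string{
    		{"systemctl", "-f", "daemon-reload"},
    		{"systemctl", "-f", "enable", "docker"},
    		{"systemctl", "-f", "restart", "docker"},
    	} {
    		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
    			log.Fatalf("%v: %v\n%s", args, err, out)
    		}
    	}
    }
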
	I0929 14:35:47.450619 1596062 machine.go:96] duration metric: took 4.41218926s to provisionDockerMachine
	I0929 14:35:47.450630 1596062 start.go:293] postStartSetup for "default-k8s-diff-port-186820" (driver="docker")
	I0929 14:35:47.450641 1596062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:35:47.450716 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:35:47.450765 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.470252 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.570022 1596062 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:35:47.573521 1596062 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:35:47.573556 1596062 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:35:47.573567 1596062 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:35:47.573574 1596062 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:35:47.573585 1596062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:35:47.573643 1596062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:35:47.573731 1596062 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:35:47.573850 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:35:47.582484 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:35:47.607719 1596062 start.go:296] duration metric: took 157.074022ms for postStartSetup
	I0929 14:35:47.607821 1596062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:35:47.607869 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.624930 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.721416 1596062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:35:47.725935 1596062 fix.go:56] duration metric: took 5.054511148s for fixHost
	I0929 14:35:47.725957 1596062 start.go:83] releasing machines lock for "default-k8s-diff-port-186820", held for 5.054560232s
	I0929 14:35:47.726022 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:47.743658 1596062 ssh_runner.go:195] Run: cat /version.json
	I0929 14:35:47.743708 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.743985 1596062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:35:47.744046 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.767655 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.776135 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.868074 1596062 ssh_runner.go:195] Run: systemctl --version
	I0929 14:35:48.003111 1596062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:35:48.010051 1596062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:35:48.037046 1596062 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:35:48.037127 1596062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:35:48.046790 1596062 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 14:35:48.046821 1596062 start.go:495] detecting cgroup driver to use...
	I0929 14:35:48.046855 1596062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:35:48.046959 1596062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:35:48.064298 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:35:48.077373 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:35:48.087939 1596062 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:35:48.088011 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:35:48.099214 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:35:48.109800 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:35:48.119860 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:35:48.129709 1596062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:35:48.140034 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:35:48.151023 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:35:48.162212 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:35:48.173065 1596062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:35:48.182304 1596062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 14:35:48.191122 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:48.275156 1596062 ssh_runner.go:195] Run: sudo systemctl restart containerd
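Because the host reports the cgroupfs cgroup driver, the sed commands above rewrite /etc/containerd/config.toml in place (SystemdCgroup = false, runc v2, the pause:3.10.1 sandbox image, the CNI conf_dir) before containerd is restarted. A small sketch of just the SystemdCgroup edit done with Go's regexp instead of sed, assuming the default TOML layout:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"regexp"
    )

    func main() {
    	const path = "/etc/containerd/config.toml"
    	data, err := os.ReadFile(path)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    	patched := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    	if err := os.WriteFile(path, patched, 0644); err != nil {
    		log.Fatal(err)
    	}
    	if out, err := exec.Command("systemctl", "restart", "containerd").CombinedOutput(); err != nil {
    		log.Fatalf("restart containerd: %v\n%s", err, out)
    	}
    }
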
	I0929 14:35:48.388383 1596062 start.go:495] detecting cgroup driver to use...
	I0929 14:35:48.388435 1596062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:35:48.388487 1596062 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:35:48.403898 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:35:48.417945 1596062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:35:48.450429 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:35:48.462890 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:35:48.476336 1596062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:35:48.497267 1596062 ssh_runner.go:195] Run: which cri-dockerd
	I0929 14:35:48.501572 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:35:48.513810 1596062 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:35:48.548394 1596062 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:35:48.651762 1596062 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:35:48.744803 1596062 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:35:48.744903 1596062 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 14:35:48.765355 1596062 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:35:48.778732 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:48.873398 1596062 ssh_runner.go:195] Run: sudo systemctl restart docker
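The 130-byte /etc/docker/daemon.json pushed over scp a few lines up is not printed in the log; to match the detected host driver it presumably sets Docker's cgroup driver to cgroupfs. A hypothetical illustration of writing such a file from Go. The exact keys minikube ships are not shown here, so only the well-known `exec-opts` form is used:

    package main

    import (
    	"encoding/json"
    	"log"
    	"os"
    )

    func main() {
    	// Illustrative content only; the real daemon.json shipped by minikube is not shown in this log.
    	cfg := map[string]any{
    		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
    	}
    	b, err := json.MarshalIndent(cfg, "", "  ")
    	if err != nil {
    		log.Fatal(err)
    	}
    	if err := os.WriteFile("/etc/docker/daemon.json", b, 0644); err != nil {
    		log.Fatal(err)
    	}
    }
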
	I0929 14:35:49.382274 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 14:35:49.394500 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:35:49.406617 1596062 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 14:35:49.420787 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:35:49.432705 1596062 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:35:49.525907 1596062 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:35:49.612769 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:49.715560 1596062 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:35:49.731642 1596062 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:35:49.743392 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:49.840499 1596062 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:35:49.933414 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:35:49.952842 1596062 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:35:49.952912 1596062 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:35:49.956643 1596062 start.go:563] Will wait 60s for crictl version
	I0929 14:35:49.956708 1596062 ssh_runner.go:195] Run: which crictl
	I0929 14:35:49.960634 1596062 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:35:50.005514 1596062 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
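Before asking crictl for the version shown above, start.go waits up to 60s for /var/run/cri-dockerd.sock to appear; the wait is just a stat poll on the socket path. A minimal sketch, with the poll interval chosen arbitrarily here:

    package main

    import (
    	"log"
    	"os"
    	"time"
    )

    // waitForSocket polls until the unix socket exists or the timeout expires.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return os.ErrDeadlineExceeded
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		log.Fatal(err)
    	}
    	log.Println("cri-dockerd socket is ready")
    }
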
	I0929 14:35:50.005607 1596062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:35:50.035266 1596062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:35:50.064977 1596062 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:35:50.065096 1596062 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-186820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:35:50.085518 1596062 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 14:35:50.090438 1596062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:35:50.104259 1596062 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:35:50.104391 1596062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:35:50.104452 1596062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:35:50.126383 1596062 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:35:50.126409 1596062 docker.go:621] Images already preloaded, skipping extraction
	I0929 14:35:50.126472 1596062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:35:50.146276 1596062 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:35:50.146318 1596062 cache_images.go:85] Images are preloaded, skipping loading
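The two `docker images` listings above are how the restart decides it can skip extracting the preload tarball: every image required for v1.34.0 is already present in the container's Docker daemon. A sketch of that presence check, with the expected list copied from the log output:

    package main

    import (
    	"fmt"
    	"log"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	have := map[string]bool{}
    	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
    		have[line] = true
    	}
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.34.0",
    		"registry.k8s.io/kube-controller-manager:v1.34.0",
    		"registry.k8s.io/kube-scheduler:v1.34.0",
    		"registry.k8s.io/kube-proxy:v1.34.0",
    		"registry.k8s.io/etcd:3.6.4-0",
    		"registry.k8s.io/pause:3.10.1",
    		"registry.k8s.io/coredns/coredns:v1.12.1",
    		"gcr.io/k8s-minikube/storage-provisioner:v5",
    	}
    	missing := 0
    	for _, img := range required {
    		if !have[img] {
    			fmt.Println("missing:", img)
    			missing++
    		}
    	}
    	if missing == 0 {
    		fmt.Println("images already preloaded, extraction can be skipped")
    	}
    }
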
	I0929 14:35:50.146329 1596062 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 docker true true} ...
	I0929 14:35:50.146441 1596062 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-186820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 14:35:50.146513 1596062 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:35:50.200396 1596062 cni.go:84] Creating CNI manager for ""
	I0929 14:35:50.200426 1596062 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:35:50.200440 1596062 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:35:50.200460 1596062 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-186820 NodeName:default-k8s-diff-port-186820 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:35:50.200650 1596062 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-186820"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 14:35:50.200727 1596062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:35:50.210044 1596062 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 14:35:50.210118 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:35:50.219378 1596062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0929 14:35:50.237028 1596062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:35:50.255641 1596062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
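The 2229-byte kubeadm.yaml.new shipped above is the config printed a few lines earlier, rendered with this profile's values (node name, 192.168.76.2, port 8444, the cri-dockerd socket). A trimmed sketch of producing such a fragment with text/template; the struct fields are illustrative and not minikube's actual template parameters:

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: unix:///var/run/cri-dockerd.sock
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        - name: "node-ip"
          value: "{{.NodeIP}}"
      taints: []
    `

    func main() {
    	params := struct {
    		NodeName      string
    		NodeIP        string
    		APIServerPort int
    	}{"default-k8s-diff-port-186820", "192.168.76.2", 8444}

    	t := template.Must(template.New("kubeadm").Parse(initCfg))
    	if err := t.Execute(os.Stdout, params); err != nil {
    		log.Fatal(err)
    	}
    }
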
	I0929 14:35:50.274465 1596062 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:35:50.278275 1596062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:35:50.289351 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:50.378347 1596062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:35:50.393916 1596062 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820 for IP: 192.168.76.2
	I0929 14:35:50.393942 1596062 certs.go:194] generating shared ca certs ...
	I0929 14:35:50.393959 1596062 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:50.394101 1596062 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:35:50.394152 1596062 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:35:50.394164 1596062 certs.go:256] generating profile certs ...
	I0929 14:35:50.394266 1596062 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/client.key
	I0929 14:35:50.394344 1596062 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.key.3abc893e
	I0929 14:35:50.394410 1596062 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.key
	I0929 14:35:50.394524 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:35:50.394563 1596062 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:35:50.394576 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:35:50.394602 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:35:50.394627 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:35:50.394652 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:35:50.394699 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:35:50.395324 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:35:50.425482 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:35:50.458821 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:35:50.492420 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:35:50.551343 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 14:35:50.605319 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 14:35:50.639423 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:35:50.678207 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 14:35:50.718215 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:35:50.747191 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:35:50.779504 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:35:50.809480 1596062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:35:50.830273 1596062 ssh_runner.go:195] Run: openssl version
	I0929 14:35:50.836472 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:35:50.848203 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.851953 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.852017 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.859388 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:35:50.868867 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:35:50.878588 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.882188 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.882261 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.890114 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 14:35:50.899476 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:35:50.909249 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.913394 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.913486 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.921135 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:35:50.930563 1596062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:35:50.934410 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 14:35:50.941795 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 14:35:50.950427 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 14:35:50.960816 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 14:35:50.970602 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 14:35:50.977819 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
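The openssl runs just above do two things: `-hash` plus the `ln -fs ... <hash>.0` links give the CA certs the subject-hash names the TLS stack looks up in /etc/ssl/certs, and `-checkend 86400` asks whether each control-plane cert is still valid for at least another day (exit status 0 means it is). A small sketch of the expiry check, shelling out to openssl the same way the log does:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // validForADay reports whether the certificate will still be valid 86400s from now.
    // openssl exits 0 when the cert does NOT expire within the window, non-zero otherwise.
    func validForADay(certPath string) bool {
    	err := exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run()
    	return err == nil
    }

    func main() {
    	certs := []string{
    		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
    		"/var/lib/minikube/certs/etcd/server.crt",
    		"/var/lib/minikube/certs/front-proxy-client.crt",
    	}
    	for _, c := range certs {
    		fmt.Printf("%s valid for 24h: %v\n", c, validForADay(c))
    	}
    }
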
	I0929 14:35:50.985284 1596062 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:50.985429 1596062 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:35:51.006801 1596062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:35:51.025256 1596062 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 14:35:51.025334 1596062 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 14:35:51.025424 1596062 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 14:35:51.041400 1596062 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 14:35:51.042316 1596062 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-186820" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:51.042910 1596062 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1125775/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-186820" cluster setting kubeconfig missing "default-k8s-diff-port-186820" context setting]
	I0929 14:35:51.043713 1596062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.045723 1596062 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 14:35:51.061546 1596062 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 14:35:51.061580 1596062 kubeadm.go:593] duration metric: took 36.227514ms to restartPrimaryControlPlane
	I0929 14:35:51.061589 1596062 kubeadm.go:394] duration metric: took 76.316349ms to StartCluster
	I0929 14:35:51.061606 1596062 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.061666 1596062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:51.063237 1596062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.063476 1596062 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:35:51.063781 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:51.063837 1596062 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:35:51.063907 1596062 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.063922 1596062 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.063934 1596062 addons.go:247] addon storage-provisioner should already be in state true
	I0929 14:35:51.063956 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.064489 1596062 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.064568 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.064581 1596062 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-186820"
	I0929 14:35:51.064928 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.067934 1596062 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.067967 1596062 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.067974 1596062 addons.go:247] addon metrics-server should already be in state true
	I0929 14:35:51.068006 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.068449 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.069089 1596062 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.069110 1596062 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.069117 1596062 addons.go:247] addon dashboard should already be in state true
	I0929 14:35:51.069143 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.069590 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.076810 1596062 out.go:179] * Verifying Kubernetes components...
	I0929 14:35:51.091555 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:51.118136 1596062 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:35:51.125122 1596062 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.125149 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:35:51.125225 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.164326 1596062 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.164353 1596062 addons.go:247] addon default-storageclass should already be in state true
	I0929 14:35:51.164390 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.170550 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.184841 1596062 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 14:35:51.190867 1596062 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 14:35:51.199347 1596062 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 14:35:51.199401 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 14:35:51.205983 1596062 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 14:35:51.206084 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.202823 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.213345 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 14:35:51.213391 1596062 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 14:35:51.213484 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.230915 1596062 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:51.230936 1596062 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:35:51.230996 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.269958 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.296608 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.306953 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.321614 1596062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:35:51.387857 1596062 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-186820" to be "Ready" ...
	I0929 14:35:51.488310 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.584676 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 14:35:51.584747 1596062 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 14:35:51.636648 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:51.656953 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 14:35:51.656977 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 14:35:51.769528 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 14:35:51.769551 1596062 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0929 14:35:51.776704 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:51.776767 1596062 retry.go:31] will retry after 176.889773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:51.799383 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 14:35:51.799417 1596062 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 14:35:51.919355 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 14:35:51.919384 1596062 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 14:35:51.953840 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.958674 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 14:35:51.958698 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 14:35:51.997497 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:51.997523 1596062 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 14:35:52.312165 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:52.398850 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 14:35:52.398879 1596062 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 14:35:52.469654 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.469690 1596062 retry.go:31] will retry after 160.704677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 14:35:52.469763 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.469777 1596062 retry.go:31] will retry after 381.313638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.566150 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 14:35:52.566178 1596062 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 14:35:52.631374 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:52.752298 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 14:35:52.752376 1596062 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0929 14:35:52.812288 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.812366 1596062 retry.go:31] will retry after 303.64621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.851712 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:52.884643 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 14:35:52.884713 1596062 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 14:35:53.087320 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:35:53.087401 1596062 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 14:35:53.116319 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:53.151041 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:35:56.942553 1596062 node_ready.go:49] node "default-k8s-diff-port-186820" is "Ready"
	I0929 14:35:56.942583 1596062 node_ready.go:38] duration metric: took 5.554681325s for node "default-k8s-diff-port-186820" to be "Ready" ...
	I0929 14:35:56.942602 1596062 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:35:56.942665 1596062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:35:57.186445 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.55502845s)
	I0929 14:35:59.647559 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.795763438s)
	I0929 14:35:59.694900 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.578497303s)
	I0929 14:35:59.694937 1596062 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-186820"
	I0929 14:35:59.695034 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.543910818s)
	I0929 14:35:59.695216 1596062 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.752538922s)
	I0929 14:35:59.695237 1596062 api_server.go:72] duration metric: took 8.631722688s to wait for apiserver process to appear ...
	I0929 14:35:59.695243 1596062 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:35:59.695260 1596062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 14:35:59.698283 1596062 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-186820 addons enable metrics-server
	
	I0929 14:35:59.701228 1596062 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0929 14:35:59.704363 1596062 addons.go:514] duration metric: took 8.640511326s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0929 14:35:59.704573 1596062 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:35:59.704591 1596062 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:36:00.200300 1596062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 14:36:00.235965 1596062 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 14:36:00.246294 1596062 api_server.go:141] control plane version: v1.34.0
	I0929 14:36:00.246322 1596062 api_server.go:131] duration metric: took 551.072592ms to wait for apiserver health ...
	I0929 14:36:00.246333 1596062 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:36:00.258786 1596062 system_pods.go:59] 8 kube-system pods found
	I0929 14:36:00.258905 1596062 system_pods.go:61] "coredns-66bc5c9577-wb8jw" [c72f66ff-a464-43c6-a0e4-82da1ba66780] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:36:00.258925 1596062 system_pods.go:61] "etcd-default-k8s-diff-port-186820" [a89a2e2c-7628-44d9-a0ff-f7a51680fa48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:36:00.258935 1596062 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-186820" [f6270c6c-df3a-461a-94d1-b1c494e85f0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:36:00.258944 1596062 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-186820" [e5cd4b48-40ea-44c9-9389-804a2a149bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:36:00.259016 1596062 system_pods.go:61] "kube-proxy-xbpqv" [0cb52a5d-89e9-4ed8-9ff3-93c7f80b94a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:36:00.259074 1596062 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-186820" [05635437-5cc5-45f7-aec0-5c447e7679a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:36:00.259092 1596062 system_pods.go:61] "metrics-server-746fcd58dc-nbbb9" [43fcdf52-1359-4a10-8f64-c721fa11c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:36:00.259101 1596062 system_pods.go:61] "storage-provisioner" [d20cd17d-3b6e-4c2a-9d32-f047094f77a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:36:00.259111 1596062 system_pods.go:74] duration metric: took 12.770585ms to wait for pod list to return data ...
	I0929 14:36:00.259168 1596062 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:36:00.267463 1596062 default_sa.go:45] found service account: "default"
	I0929 14:36:00.267489 1596062 default_sa.go:55] duration metric: took 8.313947ms for default service account to be created ...
	I0929 14:36:00.267500 1596062 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:36:00.275897 1596062 system_pods.go:86] 8 kube-system pods found
	I0929 14:36:00.276012 1596062 system_pods.go:89] "coredns-66bc5c9577-wb8jw" [c72f66ff-a464-43c6-a0e4-82da1ba66780] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:36:00.276046 1596062 system_pods.go:89] "etcd-default-k8s-diff-port-186820" [a89a2e2c-7628-44d9-a0ff-f7a51680fa48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:36:00.276089 1596062 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-186820" [f6270c6c-df3a-461a-94d1-b1c494e85f0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:36:00.276122 1596062 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-186820" [e5cd4b48-40ea-44c9-9389-804a2a149bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:36:00.276164 1596062 system_pods.go:89] "kube-proxy-xbpqv" [0cb52a5d-89e9-4ed8-9ff3-93c7f80b94a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:36:00.276193 1596062 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-186820" [05635437-5cc5-45f7-aec0-5c447e7679a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:36:00.276220 1596062 system_pods.go:89] "metrics-server-746fcd58dc-nbbb9" [43fcdf52-1359-4a10-8f64-c721fa11c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:36:00.276263 1596062 system_pods.go:89] "storage-provisioner" [d20cd17d-3b6e-4c2a-9d32-f047094f77a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:36:00.276302 1596062 system_pods.go:126] duration metric: took 8.789614ms to wait for k8s-apps to be running ...
	I0929 14:36:00.276347 1596062 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:36:00.276463 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:36:00.322130 1596062 system_svc.go:56] duration metric: took 45.77635ms WaitForService to wait for kubelet
	I0929 14:36:00.322171 1596062 kubeadm.go:578] duration metric: took 9.258650816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:36:00.322195 1596062 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:36:00.330255 1596062 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:36:00.330363 1596062 node_conditions.go:123] node cpu capacity is 2
	I0929 14:36:00.330378 1596062 node_conditions.go:105] duration metric: took 8.17742ms to run NodePressure ...
	I0929 14:36:00.330394 1596062 start.go:241] waiting for startup goroutines ...
	I0929 14:36:00.330402 1596062 start.go:246] waiting for cluster config update ...
	I0929 14:36:00.330414 1596062 start.go:255] writing updated cluster config ...
	I0929 14:36:00.330883 1596062 ssh_runner.go:195] Run: rm -f paused
	I0929 14:36:00.336791 1596062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:36:00.352867 1596062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wb8jw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:36:02.362537 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:04.859542 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:06.860829 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:09.359186 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:11.859196 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:14.358754 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:16.859093 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:19.358587 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:21.362560 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:23.858978 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:25.863368 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:27.868276 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:30.358700 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:32.358763 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	I0929 14:36:32.858935 1596062 pod_ready.go:94] pod "coredns-66bc5c9577-wb8jw" is "Ready"
	I0929 14:36:32.858962 1596062 pod_ready.go:86] duration metric: took 32.506066188s for pod "coredns-66bc5c9577-wb8jw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.862337 1596062 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.868713 1596062 pod_ready.go:94] pod "etcd-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:32.868746 1596062 pod_ready.go:86] duration metric: took 6.378054ms for pod "etcd-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.871570 1596062 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.876378 1596062 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:32.876410 1596062 pod_ready.go:86] duration metric: took 4.809833ms for pod "kube-apiserver-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.879056 1596062 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.057602 1596062 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:33.057631 1596062 pod_ready.go:86] duration metric: took 178.552151ms for pod "kube-controller-manager-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.256851 1596062 pod_ready.go:83] waiting for pod "kube-proxy-xbpqv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.657271 1596062 pod_ready.go:94] pod "kube-proxy-xbpqv" is "Ready"
	I0929 14:36:33.657301 1596062 pod_ready.go:86] duration metric: took 400.41966ms for pod "kube-proxy-xbpqv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.857548 1596062 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:34.256475 1596062 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:34.256548 1596062 pod_ready.go:86] duration metric: took 398.968386ms for pod "kube-scheduler-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:34.256562 1596062 pod_ready.go:40] duration metric: took 33.919672235s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:36:34.315168 1596062 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 14:36:34.318274 1596062 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-186820" cluster and "default" namespace by default
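	
	The wait recorded above, where https://192.168.76.2:8444/healthz first returns 500 (only poststarthook/apiservice-discovery-controller failing) and roughly half a second later returns 200, is a plain poll-until-healthy loop. The sketch below reproduces that pattern outside the test harness; it is not minikube's own code. The endpoint is copied from the log, while the 500 ms poll interval, the 5 s request timeout, and the skipped TLS verification are illustrative assumptions.
	
	// healthz_probe.go - minimal sketch of polling the apiserver /healthz endpoint
	// until it reports 200, mirroring the api_server.go wait in the log above.
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		// Apiserver address as reported in the log above.
		url := "https://192.168.76.2:8444/healthz"
	
		// TLS verification is skipped here for illustration only; the real client
		// would use the cluster's CA bundle.
		client := &http.Client{
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
			Timeout:   5 * time.Second,
		}
	
		for {
			resp, err := client.Get(url)
			if err != nil {
				fmt.Println("healthz not reachable yet:", err)
			} else {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // "ok"
				}
			}
			time.Sleep(500 * time.Millisecond) // assumed poll interval
		}
	}
	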
	
	
	==> Docker <==
	Sep 29 14:35:15 embed-certs-641794 dockerd[904]: time="2025-09-29T14:35:15.847830741Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:35:24 embed-certs-641794 dockerd[904]: time="2025-09-29T14:35:24.073240219Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:35:24 embed-certs-641794 dockerd[904]: time="2025-09-29T14:35:24.271446282Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:35:24 embed-certs-641794 dockerd[904]: time="2025-09-29T14:35:24.271558669Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:35:24 embed-certs-641794 cri-dockerd[1218]: time="2025-09-29T14:35:24Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:36:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:41.853782880Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:36:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:41.853840957Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:36:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:41.858231459Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:36:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:41.858276005Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:36:44 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:44.885973702Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:36:44 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:44.980289952Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:36:54 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:54.079629218Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:36:54 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:54.270075403Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:36:54 embed-certs-641794 dockerd[904]: time="2025-09-29T14:36:54.270176910Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:36:54 embed-certs-641794 cri-dockerd[1218]: time="2025-09-29T14:36:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:39:24 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:24.845431945Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:39:24 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:24.845476737Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:39:24 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:24.848196777Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:39:24 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:24.848261213Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:39:38 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:38.877138614Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:39:38 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:38.958317187Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:39:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:41.053028152Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:39:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:41.247792180Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:39:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:41.248047666Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:39:41 embed-certs-641794 cri-dockerd[1218]: time="2025-09-29T14:39:41Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
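	
	Two distinct pull failures appear above: the fake.domain lookups trace back to the fake.domain/registry.k8s.io/echoserver:1.4 image configured in the addon log earlier, while the kubernetesui/dashboard pull is rejected with "toomanyrequests", Docker Hub's anonymous pull rate limit. As a hedged aside, the snippet below follows the procedure Docker documents for inspecting that anonymous limit from the affected host; the endpoints and header names come from that documentation, and error handling is trimmed for brevity.
	
	// ratelimit_check.go - stand-alone sketch (not from the test suite) that reads
	// Docker Hub's anonymous pull-rate-limit headers.
	package main
	
	import (
		"encoding/json"
		"fmt"
		"net/http"
	)
	
	func main() {
		// 1. Fetch an anonymous pull token for the rate-limit test repository.
		tokenURL := "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull"
		resp, err := http.Get(tokenURL)
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		var tok struct {
			Token string `json:"token"`
		}
		if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
			panic(err)
		}
	
		// 2. HEAD the manifest; the response headers carry the current limit/remaining.
		req, _ := http.NewRequest(http.MethodHead,
			"https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
		req.Header.Set("Authorization", "Bearer "+tok.Token)
		head, err := http.DefaultClient.Do(req)
		if err != nil {
			panic(err)
		}
		defer head.Body.Close()
		fmt.Println("ratelimit-limit:    ", head.Header.Get("ratelimit-limit"))
		fmt.Println("ratelimit-remaining:", head.Header.Get("ratelimit-remaining"))
	}
	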
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5e3d225d2637       ba04bb24b9575                                                                                         8 minutes ago       Running             storage-provisioner       2                   041045876cfb0       storage-provisioner
	a9885e4fb4ba1       138784d87c9c5                                                                                         9 minutes ago       Running             coredns                   1                   cff4b2a992f48       coredns-66bc5c9577-hmpmx
	4aa49530e7852       1611cd07b61d5                                                                                         9 minutes ago       Running             busybox                   1                   92ed894df452f       busybox
	25d990cb20b46       6fc32d66c1411                                                                                         9 minutes ago       Running             kube-proxy                1                   40a9a09bfa7d0       kube-proxy-hq49j
	00b043c910b1a       ba04bb24b9575                                                                                         9 minutes ago       Exited              storage-provisioner       1                   041045876cfb0       storage-provisioner
	baae0522a93f6       a25f5ef9c34c3                                                                                         9 minutes ago       Running             kube-scheduler            1                   a04885a4a6910       kube-scheduler-embed-certs-641794
	2185abcc460ea       d291939e99406                                                                                         9 minutes ago       Running             kube-apiserver            1                   cbcd0c87d851f       kube-apiserver-embed-certs-641794
	1c2e294415724       a1894772a478e                                                                                         9 minutes ago       Running             etcd                      1                   4f218b0a654d2       etcd-embed-certs-641794
	e50bb88c331ce       996be7e86d9b3                                                                                         9 minutes ago       Running             kube-controller-manager   1                   f0f1ad1622dd8       kube-controller-manager-embed-certs-641794
	a7b027d5e346a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              busybox                   0                   89cf68d9c09e9       busybox
	391289f0044ce       138784d87c9c5                                                                                         10 minutes ago      Exited              coredns                   0                   abee1b5aa3928       coredns-66bc5c9577-hmpmx
	9383fb7d79769       6fc32d66c1411                                                                                         10 minutes ago      Exited              kube-proxy                0                   385ffe7bbb81c       kube-proxy-hq49j
	5296549dc4c62       a25f5ef9c34c3                                                                                         11 minutes ago      Exited              kube-scheduler            0                   1dcc63798c9bc       kube-scheduler-embed-certs-641794
	b429eaa7a04a7       996be7e86d9b3                                                                                         11 minutes ago      Exited              kube-controller-manager   0                   a1b6ef32508ff       kube-controller-manager-embed-certs-641794
	cb04aa5bbfcb3       d291939e99406                                                                                         11 minutes ago      Exited              kube-apiserver            0                   4dd363fb0319e       kube-apiserver-embed-certs-641794
	ba24bc9023aca       a1894772a478e                                                                                         11 minutes ago      Exited              etcd                      0                   a5913658e6bcd       etcd-embed-certs-641794
	
	
	==> coredns [391289f0044c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	[INFO] Reloading complete
	[INFO] 127.0.0.1:49821 - 4211 "HINFO IN 2052675392540096316.4631226984732371511. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011670386s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9885e4fb4ba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50006 - 64683 "HINFO IN 6825412484363477214.2882775065589529508. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065637833s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
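	
	The errors above show coredns timing out while listing Services, Namespaces and EndpointSlices through the in-cluster apiserver address 10.96.0.1:443. A minimal way to reproduce just the connectivity part, from a pod on the same cluster network, is a timed TCP dial like the sketch below; the address is taken from the log and the 5-second timeout is an arbitrary assumption, so this is a debugging aid rather than anything from the test suite.
	
	// svc_dial_check.go - sketch of a timed TCP dial to the in-cluster kubernetes
	// Service address, matching the "dial tcp 10.96.0.1:443: i/o timeout" symptom.
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	func main() {
		addr := "10.96.0.1:443" // in-cluster apiserver Service address from the log
	
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second) // assumed timeout
		if err != nil {
			fmt.Println("cannot reach the kubernetes Service at", addr, "-", err)
			return
		}
		conn.Close()
		fmt.Println("TCP connect to", addr, "succeeded")
	}
	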
	
	
	==> describe nodes <==
	Name:               embed-certs-641794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-641794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=embed-certs-641794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T14_32_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 14:32:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-641794
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 14:43:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 14:40:16 +0000   Mon, 29 Sep 2025 14:32:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 14:40:16 +0000   Mon, 29 Sep 2025 14:32:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 14:40:16 +0000   Mon, 29 Sep 2025 14:32:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 14:40:16 +0000   Mon, 29 Sep 2025 14:32:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-641794
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 21b44f1d4bdf4527aca15effb6a3cb47
	  System UUID:                15b26991-5060-468d-89e2-2473f52c87e3
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-hmpmx                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     11m
	  kube-system                 etcd-embed-certs-641794                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         11m
	  kube-system                 kube-apiserver-embed-certs-641794             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-embed-certs-641794    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-hq49j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-embed-certs-641794             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 metrics-server-746fcd58dc-rns62               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-stm84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-47mqf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m37s                  kube-proxy       
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  11m (x8 over 11m)      kubelet          Node embed-certs-641794 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x8 over 11m)      kubelet          Node embed-certs-641794 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m (x7 over 11m)      kubelet          Node embed-certs-641794 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  11m                    kubelet          Node embed-certs-641794 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 11m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  11m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    11m                    kubelet          Node embed-certs-641794 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     11m                    kubelet          Node embed-certs-641794 status is now: NodeHasSufficientPID
	  Normal   Starting                 11m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           11m                    node-controller  Node embed-certs-641794 event: Registered Node embed-certs-641794 in Controller
	  Normal   Starting                 9m52s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m52s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m52s (x8 over 9m52s)  kubelet          Node embed-certs-641794 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m52s (x8 over 9m52s)  kubelet          Node embed-certs-641794 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m52s (x7 over 9m52s)  kubelet          Node embed-certs-641794 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m38s                  node-controller  Node embed-certs-641794 event: Registered Node embed-certs-641794 in Controller
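The node summary and event stream above are the standard `kubectl describe node` view for embed-certs-641794; assuming the node advertises 2 allocatable CPUs (its Capacity block is not shown in this excerpt), the 850m of CPU requests works out to roughly 850m / 2000m ≈ 42%, matching the Allocated resources table. A minimal triage sketch, assuming the kubectl context points at this profile (for example via `minikube -p embed-certs-641794 kubectl --`); the profile and node name are taken from the logs above, everything else is illustrative:

    # Reproduce the node view and list the pods scheduled on it (standard kubectl commands).
    kubectl describe node embed-certs-641794
    kubectl get pods -A -o wide --field-selector spec.nodeName=embed-certs-641794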
	
	
	==> dmesg <==
	
	
	==> etcd [1c2e29441572] <==
	{"level":"warn","ts":"2025-09-29T14:33:45.412331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.465748Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.516162Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.533775Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.568896Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.624758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.642091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.676045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.701700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.746335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.776602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.816594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.854660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.871859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.915370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.943858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.962838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.036593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.056302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.079273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.166188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.180625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.214021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.252782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.363087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	
	
	==> etcd [ba24bc9023ac] <==
	{"level":"warn","ts":"2025-09-29T14:32:20.881274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.898124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.911971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.940347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.954832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.970615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:21.045343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:33:17.103574Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T14:33:17.103630Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"embed-certs-641794","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-09-29T14:33:17.103728Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:33:24.116116Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:33:24.118597Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:33:24.118720Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-09-29T14:33:24.118935Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T14:33:24.118990Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T14:33:24.119923Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:33:24.119983Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:33:24.119993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T14:33:24.120261Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:33:24.120288Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:33:24.120296Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:33:24.123406Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-09-29T14:33:24.123689Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:33:24.123823Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:33:24.123911Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"embed-certs-641794","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 14:43:29 up  6:26,  0 users,  load average: 0.51, 0.94, 1.76
	Linux embed-certs-641794 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2185abcc460e] <==
	I0929 14:39:43.714195       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:39:49.288558       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:39:49.288606       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:39:49.288618       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:39:49.289617       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:39:49.289791       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:39:49.289813       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:40:35.884410       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:41:06.728593       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:41:49.289063       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:41:49.289122       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:41:49.289302       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:41:49.290190       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:41:49.290358       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:41:49.290377       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:42:04.489387       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:42:27.456283       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:43:21.015320       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:43:28.864374       1 stats.go:136] "Error getting keys" err="empty key: \"\""
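The repeated 503s for v1beta1.metrics.k8s.io indicate that the aggregated APIService has no healthy backend, which lines up with the metrics-server pod stuck in ImagePullBackOff in the kubelet log further down (its image points at an apparently intentional fake.domain registry that cannot resolve). A quick way to confirm, assuming the standard k8s-app=metrics-server label from the upstream manifests (the label is an assumption; the pod name is taken from the node listing above):

    # Check the aggregated API registration and the pod backing it.
    kubectl get apiservice v1beta1.metrics.k8s.io
    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl -n kube-system describe pod metrics-server-746fcd58dc-rns62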
	
	
	==> kube-apiserver [cb04aa5bbfcb] <==
	W0929 14:33:26.687035       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.709711       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.746514       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.772916       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.794840       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.811794       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.814269       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.823237       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.828728       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.841665       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.848291       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.877451       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.884451       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.931306       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.936829       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.989159       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.000807       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.029478       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.034002       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.034443       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.057423       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.061052       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.090381       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.182651       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.244168       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b429eaa7a04a] <==
	I0929 14:32:28.789735       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 14:32:28.798426       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 14:32:28.798473       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:32:28.798729       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 14:32:28.798745       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 14:32:28.799222       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 14:32:28.799294       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 14:32:28.799367       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-641794"
	I0929 14:32:28.799406       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 14:32:28.800012       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 14:32:28.800171       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 14:32:28.800397       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 14:32:28.800560       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 14:32:28.800688       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 14:32:28.800776       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 14:32:28.801197       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 14:32:28.801346       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 14:32:28.803391       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 14:32:28.808620       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 14:32:28.813275       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 14:32:28.813286       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 14:32:28.820941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:32:28.851177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 14:32:28.852409       1 shared_informer.go:356] "Caches are synced" controller="service account"
	E0929 14:33:16.312316       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e50bb88c331c] <==
	I0929 14:37:21.689241       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:37:51.493469       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:37:51.697161       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:38:21.498493       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:38:21.704484       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:38:51.503756       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:38:51.712908       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:39:21.508798       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:39:21.721055       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:39:51.513557       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:39:51.730109       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:40:21.517922       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:40:21.737827       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:40:51.522781       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:40:51.746271       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:41:21.526882       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:41:21.754409       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:41:51.531613       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:41:51.766249       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:42:21.536180       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:42:21.773637       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:42:51.540749       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:42:51.780959       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:43:21.544633       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:43:21.789031       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [25d990cb20b4] <==
	I0929 14:33:51.582317       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:33:51.835311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:33:51.937436       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:33:51.937477       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 14:33:51.937552       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:33:52.021008       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:33:52.021075       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:33:52.030164       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:33:52.030935       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:33:52.030954       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:33:52.033087       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:33:52.033102       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:33:52.033506       1 config.go:200] "Starting service config controller"
	I0929 14:33:52.033514       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:33:52.038293       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:33:52.038315       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:33:52.054416       1 config.go:309] "Starting node config controller"
	I0929 14:33:52.054436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:33:52.054443       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:33:52.134550       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:33:52.134610       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 14:33:52.140572       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
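The "Kube-proxy configuration may be incomplete or incorrect" line is kube-proxy's own advisory about nodePortAddresses being unset, and the suggested remedy is quoted in the message itself (--nodeport-addresses primary). A sketch for checking how the cluster currently configures it, assuming the kubeadm-style kube-proxy ConfigMap that minikube normally creates (the ConfigMap name is an assumption):

    # Look for nodePortAddresses in the kube-proxy configuration.
    kubectl -n kube-system get configmap kube-proxy -o yaml | grep -n -A2 nodePortAddresses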
	
	
	==> kube-proxy [9383fb7d7976] <==
	I0929 14:32:30.991256       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:32:31.098266       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:32:31.199388       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:32:31.199424       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 14:32:31.199486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:32:31.244428       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:32:31.244520       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:32:31.254959       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:32:31.255262       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:32:31.255296       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:32:31.256408       1 config.go:200] "Starting service config controller"
	I0929 14:32:31.256427       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:32:31.256855       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:32:31.256871       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:32:31.256891       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:32:31.256895       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:32:31.260983       1 config.go:309] "Starting node config controller"
	I0929 14:32:31.260997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:32:31.261005       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:32:31.356607       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:32:31.365283       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 14:32:31.365551       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5296549dc4c6] <==
	E0929 14:32:21.891748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:32:21.892196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 14:32:22.691094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 14:32:22.693972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 14:32:22.739274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 14:32:22.861406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 14:32:22.873234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 14:32:22.912269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:32:22.915835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:32:22.921217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 14:32:22.926192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:32:22.934099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 14:32:22.965852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 14:32:23.008129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 14:32:23.008606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 14:32:23.053605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 14:32:23.079116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 14:32:23.181216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0929 14:32:26.047018       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:33:17.262464       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 14:33:17.262495       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 14:33:17.262515       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 14:33:17.262539       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:33:17.262752       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 14:33:17.262767       1 run.go:72] "command failed" err="finished without leader elect"
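The burst of "Failed to watch … is forbidden" errors is confined to the scheduler's first seconds after start-up, while the API server is still coming up and the system:kube-scheduler RBAC bindings are not yet resolvable; it stops once the "Caches are synced" line appears, and the same pattern repeats in the restarted instance below. The closing "finished without leader elect" error is the message the scheduler logs when it is shut down outside of a leader-election handover, consistent with the node restart visible in the kubelet events above. If one wanted to double-check that the replacement scheduler settled, a minimal sketch (the component=kube-scheduler label is the usual static-pod label and is an assumption here):

    # Confirm the scheduler static pod is Running and inspect its recent log tail.
    kubectl -n kube-system get pods -l component=kube-scheduler -o wide
    kubectl -n kube-system logs kube-scheduler-embed-certs-641794 --tail=20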
	
	
	==> kube-scheduler [baae0522a93f] <==
	I0929 14:33:48.184595       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:33:48.214437       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:33:48.214488       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:33:48.226478       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 14:33:48.226607       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 14:33:48.262146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 14:33:48.262521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 14:33:48.262565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 14:33:48.262604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:33:48.262642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 14:33:48.262703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 14:33:48.262798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 14:33:48.262838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 14:33:48.262873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 14:33:48.262904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 14:33:48.274659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 14:33:48.274817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 14:33:48.274867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 14:33:48.274919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:33:48.274953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 14:33:48.285793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 14:33:48.285891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 14:33:48.286015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 14:33:48.286086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 14:33:49.715037       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 14:41:35 embed-certs-641794 kubelet[1405]: E0929 14:41:35.834735    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:41:46 embed-certs-641794 kubelet[1405]: E0929 14:41:46.834917    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:41:47 embed-certs-641794 kubelet[1405]: E0929 14:41:47.837026    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:41:48 embed-certs-641794 kubelet[1405]: E0929 14:41:48.833802    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:41:57 embed-certs-641794 kubelet[1405]: E0929 14:41:57.852308    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:42:00 embed-certs-641794 kubelet[1405]: E0929 14:42:00.834035    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:42:02 embed-certs-641794 kubelet[1405]: E0929 14:42:02.834302    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:42:10 embed-certs-641794 kubelet[1405]: E0929 14:42:10.835103    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:42:13 embed-certs-641794 kubelet[1405]: E0929 14:42:13.835501    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:42:17 embed-certs-641794 kubelet[1405]: E0929 14:42:17.835593    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:42:25 embed-certs-641794 kubelet[1405]: E0929 14:42:25.849770    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:42:27 embed-certs-641794 kubelet[1405]: E0929 14:42:27.837254    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:42:32 embed-certs-641794 kubelet[1405]: E0929 14:42:32.834364    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:42:40 embed-certs-641794 kubelet[1405]: E0929 14:42:40.834945    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:42:42 embed-certs-641794 kubelet[1405]: E0929 14:42:42.833976    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:42:43 embed-certs-641794 kubelet[1405]: E0929 14:42:43.835831    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:42:53 embed-certs-641794 kubelet[1405]: E0929 14:42:53.835199    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:42:54 embed-certs-641794 kubelet[1405]: E0929 14:42:54.834347    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:42:56 embed-certs-641794 kubelet[1405]: E0929 14:42:56.834322    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:43:07 embed-certs-641794 kubelet[1405]: E0929 14:43:07.835221    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:43:09 embed-certs-641794 kubelet[1405]: E0929 14:43:09.835245    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:43:10 embed-certs-641794 kubelet[1405]: E0929 14:43:10.834121    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:43:19 embed-certs-641794 kubelet[1405]: E0929 14:43:19.837100    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:43:22 embed-certs-641794 kubelet[1405]: E0929 14:43:22.834852    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:43:22 embed-certs-641794 kubelet[1405]: E0929 14:43:22.835200    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	
	
	==> storage-provisioner [00b043c910b1] <==
	I0929 14:33:51.094730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 14:34:21.101217       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a5e3d225d263] <==
	W0929 14:43:04.171505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:06.174711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:06.179355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:08.182532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:08.187071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:10.190368       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:10.198769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:12.202648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:12.207466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:14.210977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:14.218350       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:16.222004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:16.226715       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:18.229751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:18.236614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:20.239968       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:20.244919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:22.249311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:22.254372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:24.257776       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:24.262245       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:26.265436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:26.272066       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:28.275709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:43:28.282031       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-641794 -n embed-certs-641794
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-641794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-rns62 dashboard-metrics-scraper-6ffb444bf9-stm84 kubernetes-dashboard-855c9754f9-47mqf
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-641794 describe pod metrics-server-746fcd58dc-rns62 dashboard-metrics-scraper-6ffb444bf9-stm84 kubernetes-dashboard-855c9754f9-47mqf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-641794 describe pod metrics-server-746fcd58dc-rns62 dashboard-metrics-scraper-6ffb444bf9-stm84 kubernetes-dashboard-855c9754f9-47mqf: exit status 1 (89.972481ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-rns62" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-stm84" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-47mqf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-641794 describe pod metrics-server-746fcd58dc-rns62 dashboard-metrics-scraper-6ffb444bf9-stm84 kubernetes-dashboard-855c9754f9-47mqf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.13s)
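
The kubelet log above records three separate image-pull failures behind this timeout: metrics-server points at fake.domain/registry.k8s.io/echoserver:1.4 (the addon was enabled with --registries=MetricsServer=fake.domain, so the registry host is deliberately unresolvable), dashboard-metrics-scraper uses registry.k8s.io/echoserver:1.4, a manifest v2 schema 1 image that this Docker engine no longer accepts, and kubernetes-dashboard keeps hitting the Docker Hub anonymous pull rate limit. A rough sketch of reproducing each failure by hand, assuming direct access to the same Docker 28.x daemon on the Jenkins host (these commands are illustrative and not part of the test run):

	# unresolvable test registry: fails with the same "lookup fake.domain ... no such host" error
	docker pull fake.domain/registry.k8s.io/echoserver:1.4

	# schema 1 image: rejected by the daemon regardless of credentials
	docker pull registry.k8s.io/echoserver:1.4

	# Docker Hub rate limit: anonymous pulls from this host are exhausted; an authenticated pull typically is not
	docker login
	docker pull docker.io/kubernetesui/dashboard:v2.7.0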

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.87s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tdxbq" [4e2ddb81-1cba-47a1-897a-4f8a7912d3f3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 14:36:53.257448 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:37:00.341646 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:37:03.456440 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:37:47.162241 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:37:50.245331 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:38:14.863301 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:38:23.641253 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:38:47.929537 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:38:59.882760 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:39:02.358883 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:39:15.629780 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:40:01.311945 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:40:03.409342 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:40:03.685037 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:40:20.567071 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:40:38.296394 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:40:52.915882 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:41:53.257202 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:42:00.341170 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:42:03.456587 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:42:47.162610 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:42:50.245349 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:43:06.749413 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-09-29 14:45:35.09157697 +0000 UTC m=+6228.356790332
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 describe po kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) kubectl --context default-k8s-diff-port-186820 describe po kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-tdxbq
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-186820/192.168.76.2
Start Time:       Mon, 29 Sep 2025 14:36:02 +0000
Labels:           gcp-auth-skip-secret=true
                  k8s-app=kubernetes-dashboard
                  pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
  IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
  kubernetes-dashboard:
    Container ID:  
    Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
    Image ID:      
    Port:          9090/TCP
    Host Port:     0/TCP
    Args:
      --namespace=kubernetes-dashboard
      --enable-skip-login
      --disable-settings-authorizer
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /tmp from tmp-volume (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9ghl (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  tmp-volume:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  kube-api-access-c9ghl:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
                             node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  9m33s                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq to default-k8s-diff-port-186820
  Normal   Pulling    6m30s (x5 over 9m32s)   kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
  Warning  Failed     6m29s (x5 over 9m32s)   kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
  Warning  Failed     6m29s (x5 over 9m32s)   kubelet            Error: ErrImagePull
  Warning  Failed     4m30s (x20 over 9m31s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m18s (x21 over 9m31s)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
start_stop_delete_test.go:272: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 logs kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard
start_stop_delete_test.go:272: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-186820 logs kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard: exit status 1 (106.111366ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-tdxbq" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:272: kubectl --context default-k8s-diff-port-186820 logs kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
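
The describe output above shows the only blocker for this pod is Docker Hub returning toomanyrequests on every anonymous pull of kubernetesui/dashboard:v2.7.0. A hedged sketch for checking how many anonymous pulls the CI host has left, using Docker Hub's documented rate-limit headers (assumes curl and jq on the host; ratelimitpreview/test is the probe repository Docker's documentation uses for this check):

	# request an anonymous token, then read the rate-limit headers from a HEAD request
	TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
	curl -s --head -H "Authorization: Bearer $TOKEN" \
	  https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest | grep -i ratelimit

A ratelimit-remaining of 0 here would explain why every kubelet retry during the 9m0s window failed the same way.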
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-186820
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-186820:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d",
	        "Created": "2025-09-29T14:34:38.00395341Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1596191,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T14:35:42.727283628Z",
	            "FinishedAt": "2025-09-29T14:35:41.897203495Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d/hosts",
	        "LogPath": "/var/lib/docker/containers/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d-json.log",
	        "Name": "/default-k8s-diff-port-186820",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-186820:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-186820",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d",
	                "LowerDir": "/var/lib/docker/overlay2/3615b22570de9378170039820eb0e505714a2d82f7118b9c9b22da5ad0f38b61-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3615b22570de9378170039820eb0e505714a2d82f7118b9c9b22da5ad0f38b61/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3615b22570de9378170039820eb0e505714a2d82f7118b9c9b22da5ad0f38b61/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3615b22570de9378170039820eb0e505714a2d82f7118b9c9b22da5ad0f38b61/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-186820",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-186820/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-186820",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-186820",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-186820",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "783abf3c4db843da08eb3592c5440792ba1a7ca1ddfc77f6acf07cb7d036e206",
	            "SandboxKey": "/var/run/docker/netns/783abf3c4db8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34321"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34322"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34325"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34323"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34324"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-186820": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:db:0e:d2:37:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "07a99473690a202625d605e5721cbda950adaf6af5f172bb7ac62453a5d36cb4",
	                    "EndpointID": "22bd89631aa1f5b98ca530e1e2e5eca83158fbece80d6a04776953df6ca474b7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-186820",
	                        "53a9ac8f8b5f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
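
The part of this inspect dump the failure analysis actually depends on is NetworkSettings.Ports: the API server inside the container listens on 8444 (matching the profile's --apiserver-port=8444 setting) and Docker publishes it on 127.0.0.1:34324. As a convenience, the same mapping can be read without dumping the whole JSON (a sketch; the harness itself does not do this):

	# prints 127.0.0.1:34324 for this container
	docker port default-k8s-diff-port-186820 8444/tcp

	# same value via a Go template, handy in scripts
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-186820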
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-186820 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-186820 logs -n 25: (1.300013826s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ image   │ no-preload-983174 image list --format=json                                                                                                                                                                                                      │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ pause   │ -p no-preload-983174 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ unpause │ -p no-preload-983174 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p no-preload-983174                                                                                                                                                                                                                            │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p no-preload-983174                                                                                                                                                                                                                            │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p disable-driver-mounts-627946                                                                                                                                                                                                                 │ disable-driver-mounts-627946 │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ start   │ -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-641794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ stop    │ -p embed-certs-641794 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-641794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ start   │ -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-093064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ stop    │ -p newest-cni-093064 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-093064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ start   │ -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ image   │ newest-cni-093064 image list --format=json                                                                                                                                                                                                      │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ pause   │ -p newest-cni-093064 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ unpause │ -p newest-cni-093064 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ delete  │ -p newest-cni-093064                                                                                                                                                                                                                            │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ delete  │ -p newest-cni-093064                                                                                                                                                                                                                            │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ start   │ -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-186820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ stop    │ -p default-k8s-diff-port-186820 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-186820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ start   │ -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 14:35:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 14:35:42.456122 1596062 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:35:42.456362 1596062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:35:42.456395 1596062 out.go:374] Setting ErrFile to fd 2...
	I0929 14:35:42.456415 1596062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:35:42.456738 1596062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:35:42.457163 1596062 out.go:368] Setting JSON to false
	I0929 14:35:42.458288 1596062 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22695,"bootTime":1759133848,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:35:42.458402 1596062 start.go:140] virtualization:  
	I0929 14:35:42.462007 1596062 out.go:179] * [default-k8s-diff-port-186820] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:35:42.465793 1596062 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:35:42.465926 1596062 notify.go:220] Checking for updates...
	I0929 14:35:42.471729 1596062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:35:42.474683 1596062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:42.477543 1596062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:35:42.480431 1596062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:35:42.483237 1596062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:35:42.486711 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:42.487301 1596062 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:35:42.514877 1596062 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:35:42.515008 1596062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:35:42.572860 1596062 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:35:42.562452461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:35:42.572973 1596062 docker.go:318] overlay module found
	I0929 14:35:42.576085 1596062 out.go:179] * Using the docker driver based on existing profile
	I0929 14:35:42.578939 1596062 start.go:304] selected driver: docker
	I0929 14:35:42.578961 1596062 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:42.579120 1596062 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:35:42.579853 1596062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:35:42.635895 1596062 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:35:42.626575461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:35:42.636238 1596062 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:35:42.636278 1596062 cni.go:84] Creating CNI manager for ""
	I0929 14:35:42.636347 1596062 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:35:42.636386 1596062 start.go:348] cluster config:
	{Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:42.641605 1596062 out.go:179] * Starting "default-k8s-diff-port-186820" primary control-plane node in "default-k8s-diff-port-186820" cluster
	I0929 14:35:42.645130 1596062 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:35:42.648466 1596062 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:35:42.651441 1596062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:35:42.651462 1596062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:35:42.651506 1596062 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 14:35:42.651523 1596062 cache.go:58] Caching tarball of preloaded images
	I0929 14:35:42.651603 1596062 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 14:35:42.651613 1596062 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 14:35:42.651737 1596062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/config.json ...
	I0929 14:35:42.671234 1596062 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:35:42.671260 1596062 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:35:42.671281 1596062 cache.go:232] Successfully downloaded all kic artifacts
	I0929 14:35:42.671312 1596062 start.go:360] acquireMachinesLock for default-k8s-diff-port-186820: {Name:mk14ee05a72e1bc87d0193bcc4d30163df297691 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:35:42.671385 1596062 start.go:364] duration metric: took 48.354µs to acquireMachinesLock for "default-k8s-diff-port-186820"
	I0929 14:35:42.671408 1596062 start.go:96] Skipping create...Using existing machine configuration
	I0929 14:35:42.671416 1596062 fix.go:54] fixHost starting: 
	I0929 14:35:42.671679 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:42.688259 1596062 fix.go:112] recreateIfNeeded on default-k8s-diff-port-186820: state=Stopped err=<nil>
	W0929 14:35:42.688293 1596062 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 14:35:42.691565 1596062 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-186820" ...
	I0929 14:35:42.691663 1596062 cli_runner.go:164] Run: docker start default-k8s-diff-port-186820
	I0929 14:35:42.980213 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:43.009181 1596062 kic.go:430] container "default-k8s-diff-port-186820" state is running.
	I0929 14:35:43.009618 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:43.038170 1596062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/config.json ...
	I0929 14:35:43.038413 1596062 machine.go:93] provisionDockerMachine start ...
	I0929 14:35:43.038482 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:43.061723 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:43.062111 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:43.062127 1596062 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:35:43.062747 1596062 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41000->127.0.0.1:34321: read: connection reset by peer
	I0929 14:35:46.204046 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-186820
	
	I0929 14:35:46.204073 1596062 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-186820"
	I0929 14:35:46.204141 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.222056 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.222389 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.222406 1596062 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-186820 && echo "default-k8s-diff-port-186820" | sudo tee /etc/hostname
	I0929 14:35:46.377247 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-186820
	
	I0929 14:35:46.377348 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.397114 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.397485 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.397509 1596062 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-186820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-186820/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-186820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:35:46.537135 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:35:46.537160 1596062 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:35:46.537236 1596062 ubuntu.go:190] setting up certificates
	I0929 14:35:46.537245 1596062 provision.go:84] configureAuth start
	I0929 14:35:46.537316 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:46.558841 1596062 provision.go:143] copyHostCerts
	I0929 14:35:46.558910 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:35:46.558934 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:35:46.559026 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:35:46.559142 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:35:46.559154 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:35:46.559183 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:35:46.559251 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:35:46.559260 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:35:46.559289 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:35:46.559350 1596062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-186820 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-186820 localhost minikube]
	I0929 14:35:46.733893 1596062 provision.go:177] copyRemoteCerts
	I0929 14:35:46.733959 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:35:46.733998 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.755356 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
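The ssh client parameters above (127.0.0.1:34321, user docker, the profile's id_rsa key) are enough to reproduce the connection by hand, which helps when chasing handshake failures like the "connection reset by peer" seen at 14:35:43. A minimal sketch with standard OpenSSH options; the two host-key flags are illustrative additions, everything else is taken from this log:

    ssh -i /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa \
        -p 34321 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
        docker@127.0.0.1 hostname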
	I0929 14:35:46.858489 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:35:46.883909 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 14:35:46.910465 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 14:35:46.942412 1596062 provision.go:87] duration metric: took 405.141346ms to configureAuth
	I0929 14:35:46.942438 1596062 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:35:46.942640 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:46.942699 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.959513 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.959825 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.959842 1596062 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:35:47.108999 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:35:47.109020 1596062 ubuntu.go:71] root file system type: overlay
	I0929 14:35:47.109131 1596062 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:35:47.109201 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.126915 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:47.127240 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:47.127365 1596062 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:35:47.281272 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:35:47.281364 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.299262 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:47.299576 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:47.299606 1596062 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:35:47.450591 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:35:47.450619 1596062 machine.go:96] duration metric: took 4.41218926s to provisionDockerMachine
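With provisioning done, the rendered docker unit and the resulting cgroup driver can be double-checked from the host. A minimal sketch, assuming the minikube binary and profile name used throughout this run; both commands only read state:

    out/minikube-linux-arm64 -p default-k8s-diff-port-186820 ssh -- sudo systemctl cat docker.service
    out/minikube-linux-arm64 -p default-k8s-diff-port-186820 ssh -- docker info --format '{{.CgroupDriver}}'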
	I0929 14:35:47.450630 1596062 start.go:293] postStartSetup for "default-k8s-diff-port-186820" (driver="docker")
	I0929 14:35:47.450641 1596062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:35:47.450716 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:35:47.450765 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.470252 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.570022 1596062 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:35:47.573521 1596062 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:35:47.573556 1596062 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:35:47.573567 1596062 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:35:47.573574 1596062 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:35:47.573585 1596062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:35:47.573643 1596062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:35:47.573731 1596062 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:35:47.573850 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:35:47.582484 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:35:47.607719 1596062 start.go:296] duration metric: took 157.074022ms for postStartSetup
	I0929 14:35:47.607821 1596062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:35:47.607869 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.624930 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.721416 1596062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:35:47.725935 1596062 fix.go:56] duration metric: took 5.054511148s for fixHost
	I0929 14:35:47.725957 1596062 start.go:83] releasing machines lock for "default-k8s-diff-port-186820", held for 5.054560232s
	I0929 14:35:47.726022 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:47.743658 1596062 ssh_runner.go:195] Run: cat /version.json
	I0929 14:35:47.743708 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.743985 1596062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:35:47.744046 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.767655 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.776135 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.868074 1596062 ssh_runner.go:195] Run: systemctl --version
	I0929 14:35:48.003111 1596062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:35:48.010051 1596062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:35:48.037046 1596062 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:35:48.037127 1596062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:35:48.046790 1596062 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 14:35:48.046821 1596062 start.go:495] detecting cgroup driver to use...
	I0929 14:35:48.046855 1596062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:35:48.046959 1596062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:35:48.064298 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:35:48.077373 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:35:48.087939 1596062 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:35:48.088011 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:35:48.099214 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:35:48.109800 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:35:48.119860 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:35:48.129709 1596062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:35:48.140034 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:35:48.151023 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:35:48.162212 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:35:48.173065 1596062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:35:48.182304 1596062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 14:35:48.191122 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:48.275156 1596062 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 14:35:48.388383 1596062 start.go:495] detecting cgroup driver to use...
	I0929 14:35:48.388435 1596062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:35:48.388487 1596062 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:35:48.403898 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:35:48.417945 1596062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:35:48.450429 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:35:48.462890 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:35:48.476336 1596062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:35:48.497267 1596062 ssh_runner.go:195] Run: which cri-dockerd
	I0929 14:35:48.501572 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:35:48.513810 1596062 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:35:48.548394 1596062 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:35:48.651762 1596062 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:35:48.744803 1596062 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:35:48.744903 1596062 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
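The 130-byte daemon.json payload itself is not echoed in the log. A file that pins the cgroupfs driver in the same way would look roughly like the following; this is an illustrative shape written in the printf-over-SSH style used elsewhere in this run, not the exact bytes minikube copied:

    sudo mkdir -p /etc/docker && printf '%s' '{"exec-opts": ["native.cgroupdriver=cgroupfs"]}' | sudo tee /etc/docker/daemon.json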
	I0929 14:35:48.765355 1596062 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:35:48.778732 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:48.873398 1596062 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 14:35:49.382274 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 14:35:49.394500 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:35:49.406617 1596062 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 14:35:49.420787 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:35:49.432705 1596062 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:35:49.525907 1596062 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:35:49.612769 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:49.715560 1596062 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:35:49.731642 1596062 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:35:49.743392 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:49.840499 1596062 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:35:49.933414 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:35:49.952842 1596062 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:35:49.952912 1596062 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:35:49.956643 1596062 start.go:563] Will wait 60s for crictl version
	I0929 14:35:49.956708 1596062 ssh_runner.go:195] Run: which crictl
	I0929 14:35:49.960634 1596062 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:35:50.005514 1596062 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
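The CRI endpoint that crictl reports against here can also be queried directly, which helps separate cri-dockerd problems from Docker ones. A minimal sketch; the socket path comes from the crictl.yaml written at 14:35:48.476:

    out/minikube-linux-arm64 -p default-k8s-diff-port-186820 ssh -- sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a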
	I0929 14:35:50.005607 1596062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:35:50.035266 1596062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:35:50.064977 1596062 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:35:50.065096 1596062 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-186820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:35:50.085518 1596062 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 14:35:50.090438 1596062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:35:50.104259 1596062 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:35:50.104391 1596062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:35:50.104452 1596062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:35:50.126383 1596062 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:35:50.126409 1596062 docker.go:621] Images already preloaded, skipping extraction
	I0929 14:35:50.126472 1596062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:35:50.146276 1596062 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:35:50.146318 1596062 cache_images.go:85] Images are preloaded, skipping loading
	I0929 14:35:50.146329 1596062 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 docker true true} ...
	I0929 14:35:50.146441 1596062 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-186820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 14:35:50.146513 1596062 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:35:50.200396 1596062 cni.go:84] Creating CNI manager for ""
	I0929 14:35:50.200426 1596062 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:35:50.200440 1596062 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:35:50.200460 1596062 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-186820 NodeName:default-k8s-diff-port-186820 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:35:50.200650 1596062 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-186820"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 14:35:50.200727 1596062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:35:50.210044 1596062 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 14:35:50.210118 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:35:50.219378 1596062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0929 14:35:50.237028 1596062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:35:50.255641 1596062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
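Once the generated config lands on the node as /var/tmp/minikube/kubeadm.yaml.new, kubeadm itself can vet it without touching the cluster. A minimal sketch, assuming kubeadm is present in the binaries directory listed below; --dry-run prints what would be done and applies nothing:

    out/minikube-linux-arm64 -p default-k8s-diff-port-186820 ssh -- sudo /var/lib/minikube/binaries/v1.34.0/kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run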
	I0929 14:35:50.274465 1596062 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:35:50.278275 1596062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:35:50.289351 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:50.378347 1596062 ssh_runner.go:195] Run: sudo systemctl start kubelet
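After kubelet is started with the drop-in written above, the effective unit (base file plus 10-kubeadm.conf) and its state can be inspected in one place. A minimal sketch using only read-only systemctl calls, with the same binary and profile as before:

    out/minikube-linux-arm64 -p default-k8s-diff-port-186820 ssh -- sudo systemctl cat kubelet
    out/minikube-linux-arm64 -p default-k8s-diff-port-186820 ssh -- sudo systemctl is-active kubelet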
	I0929 14:35:50.393916 1596062 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820 for IP: 192.168.76.2
	I0929 14:35:50.393942 1596062 certs.go:194] generating shared ca certs ...
	I0929 14:35:50.393959 1596062 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:50.394101 1596062 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:35:50.394152 1596062 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:35:50.394164 1596062 certs.go:256] generating profile certs ...
	I0929 14:35:50.394266 1596062 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/client.key
	I0929 14:35:50.394344 1596062 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.key.3abc893e
	I0929 14:35:50.394410 1596062 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.key
	I0929 14:35:50.394524 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:35:50.394563 1596062 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:35:50.394576 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:35:50.394602 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:35:50.394627 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:35:50.394652 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:35:50.394699 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:35:50.395324 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:35:50.425482 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:35:50.458821 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:35:50.492420 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:35:50.551343 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 14:35:50.605319 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 14:35:50.639423 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:35:50.678207 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 14:35:50.718215 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:35:50.747191 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:35:50.779504 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:35:50.809480 1596062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:35:50.830273 1596062 ssh_runner.go:195] Run: openssl version
	I0929 14:35:50.836472 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:35:50.848203 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.851953 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.852017 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.859388 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:35:50.868867 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:35:50.878588 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.882188 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.882261 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.890114 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 14:35:50.899476 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:35:50.909249 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.913394 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.913486 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.921135 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:35:50.930563 1596062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:35:50.934410 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 14:35:50.941795 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 14:35:50.950427 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 14:35:50.960816 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 14:35:50.970602 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 14:35:50.977819 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
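The `openssl x509 ... -checkend 86400` calls above exit non-zero when a certificate will expire within 86400 seconds (24 hours), which is how the restart path decides whether control-plane certs still have enough validity left. A small hedged Go sketch of the same check, shelling out to openssl exactly as the log does:

package main

import (
	"fmt"
	"os/exec"
)

// expiresWithin24h reports whether the certificate at path expires within
// 86400 seconds, using the openssl invocation seen in the log above.
// Sketch only; this is not minikube's certs package.
func expiresWithin24h(path string) (bool, error) {
	cmd := exec.Command("openssl", "x509", "-noout", "-in", path, "-checkend", "86400")
	if err := cmd.Run(); err != nil {
		if _, ok := err.(*exec.ExitError); ok {
			return true, nil // non-zero exit: certificate expires within the window
		}
		return false, err // openssl missing, unreadable file, etc.
	}
	return false, nil
}

func main() {
	expiring, err := expiresWithin24h("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	fmt.Println(expiring, err)
}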
	I0929 14:35:50.985284 1596062 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:50.985429 1596062 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:35:51.006801 1596062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:35:51.025256 1596062 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 14:35:51.025334 1596062 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 14:35:51.025424 1596062 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 14:35:51.041400 1596062 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 14:35:51.042316 1596062 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-186820" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:51.042910 1596062 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1125775/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-186820" cluster setting kubeconfig missing "default-k8s-diff-port-186820" context setting]
	I0929 14:35:51.043713 1596062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.045723 1596062 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 14:35:51.061546 1596062 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 14:35:51.061580 1596062 kubeadm.go:593] duration metric: took 36.227514ms to restartPrimaryControlPlane
	I0929 14:35:51.061589 1596062 kubeadm.go:394] duration metric: took 76.316349ms to StartCluster
	I0929 14:35:51.061606 1596062 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.061666 1596062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:51.063237 1596062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.063476 1596062 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:35:51.063781 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:51.063837 1596062 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:35:51.063907 1596062 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.063922 1596062 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.063934 1596062 addons.go:247] addon storage-provisioner should already be in state true
	I0929 14:35:51.063956 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.064489 1596062 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.064568 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.064581 1596062 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-186820"
	I0929 14:35:51.064928 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.067934 1596062 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.067967 1596062 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.067974 1596062 addons.go:247] addon metrics-server should already be in state true
	I0929 14:35:51.068006 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.068449 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.069089 1596062 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.069110 1596062 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.069117 1596062 addons.go:247] addon dashboard should already be in state true
	I0929 14:35:51.069143 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.069590 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.076810 1596062 out.go:179] * Verifying Kubernetes components...
	I0929 14:35:51.091555 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:51.118136 1596062 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:35:51.125122 1596062 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.125149 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:35:51.125225 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.164326 1596062 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.164353 1596062 addons.go:247] addon default-storageclass should already be in state true
	I0929 14:35:51.164390 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.170550 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.184841 1596062 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 14:35:51.190867 1596062 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 14:35:51.199347 1596062 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 14:35:51.199401 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 14:35:51.205983 1596062 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 14:35:51.206084 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.202823 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.213345 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 14:35:51.213391 1596062 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 14:35:51.213484 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.230915 1596062 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:51.230936 1596062 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:35:51.230996 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.269958 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.296608 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.306953 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.321614 1596062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:35:51.387857 1596062 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-186820" to be "Ready" ...
	I0929 14:35:51.488310 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.584676 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 14:35:51.584747 1596062 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 14:35:51.636648 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:51.656953 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 14:35:51.656977 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 14:35:51.769528 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 14:35:51.769551 1596062 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0929 14:35:51.776704 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:51.776767 1596062 retry.go:31] will retry after 176.889773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:51.799383 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 14:35:51.799417 1596062 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 14:35:51.919355 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 14:35:51.919384 1596062 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 14:35:51.953840 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.958674 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 14:35:51.958698 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 14:35:51.997497 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:51.997523 1596062 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 14:35:52.312165 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:52.398850 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 14:35:52.398879 1596062 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 14:35:52.469654 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.469690 1596062 retry.go:31] will retry after 160.704677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 14:35:52.469763 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.469777 1596062 retry.go:31] will retry after 381.313638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.566150 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 14:35:52.566178 1596062 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 14:35:52.631374 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:52.752298 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 14:35:52.752376 1596062 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0929 14:35:52.812288 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.812366 1596062 retry.go:31] will retry after 303.64621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
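The addons.go "apply failed, will retry" / retry.go "will retry after" pairs above are the expected pattern during a control-plane restart: `kubectl apply` is rejected with "connection refused" on port 8444 while the apiserver is still coming up, and the same manifest is retried after a short, growing delay (eventually with `--force`). A hedged Go sketch of that retry loop; the delays, attempt count, and command line are illustrative, not minikube's retry package:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// applyWithRetry re-runs `kubectl apply -f manifest` with an increasing
// delay, tolerating the transient "connection refused" errors seen while the
// apiserver restarts. Illustrative only.
func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	delay := 200 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		out, err := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifest).CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("attempt %d: %v: %s", i+1, err, out)
		time.Sleep(delay)
		delay *= 2
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/var/lib/minikube/kubeconfig", "/etc/kubernetes/addons/storage-provisioner.yaml", 5)
	fmt.Println(err)
}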
	I0929 14:35:52.851712 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:52.884643 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 14:35:52.884713 1596062 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 14:35:53.087320 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:35:53.087401 1596062 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 14:35:53.116319 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:53.151041 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:35:56.942553 1596062 node_ready.go:49] node "default-k8s-diff-port-186820" is "Ready"
	I0929 14:35:56.942583 1596062 node_ready.go:38] duration metric: took 5.554681325s for node "default-k8s-diff-port-186820" to be "Ready" ...
	I0929 14:35:56.942602 1596062 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:35:56.942665 1596062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:35:57.186445 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.55502845s)
	I0929 14:35:59.647559 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.795763438s)
	I0929 14:35:59.694900 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.578497303s)
	I0929 14:35:59.694937 1596062 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-186820"
	I0929 14:35:59.695034 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.543910818s)
	I0929 14:35:59.695216 1596062 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.752538922s)
	I0929 14:35:59.695237 1596062 api_server.go:72] duration metric: took 8.631722688s to wait for apiserver process to appear ...
	I0929 14:35:59.695243 1596062 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:35:59.695260 1596062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 14:35:59.698283 1596062 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-186820 addons enable metrics-server
	
	I0929 14:35:59.701228 1596062 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0929 14:35:59.704363 1596062 addons.go:514] duration metric: took 8.640511326s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0929 14:35:59.704573 1596062 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:35:59.704591 1596062 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
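The 500 above is the apiserver's aggregate /healthz: one post-start hook (apiservice-discovery-controller) has not finished, so the whole check fails even though every other component reports ok, and the next probe half a second later returns 200. A hedged Go sketch of polling /healthz until it is healthy; skipping TLS verification is an illustration-only shortcut (a real client would trust the cluster CA instead):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls the apiserver /healthz endpoint until it returns 200 or
// the deadline passes. Illustrative sketch, not minikube's api_server code.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not become healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.76.2:8444/healthz", time.Minute))
}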
	I0929 14:36:00.200300 1596062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 14:36:00.235965 1596062 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 14:36:00.246294 1596062 api_server.go:141] control plane version: v1.34.0
	I0929 14:36:00.246322 1596062 api_server.go:131] duration metric: took 551.072592ms to wait for apiserver health ...
	I0929 14:36:00.246333 1596062 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:36:00.258786 1596062 system_pods.go:59] 8 kube-system pods found
	I0929 14:36:00.258905 1596062 system_pods.go:61] "coredns-66bc5c9577-wb8jw" [c72f66ff-a464-43c6-a0e4-82da1ba66780] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:36:00.258925 1596062 system_pods.go:61] "etcd-default-k8s-diff-port-186820" [a89a2e2c-7628-44d9-a0ff-f7a51680fa48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:36:00.258935 1596062 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-186820" [f6270c6c-df3a-461a-94d1-b1c494e85f0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:36:00.258944 1596062 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-186820" [e5cd4b48-40ea-44c9-9389-804a2a149bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:36:00.259016 1596062 system_pods.go:61] "kube-proxy-xbpqv" [0cb52a5d-89e9-4ed8-9ff3-93c7f80b94a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:36:00.259074 1596062 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-186820" [05635437-5cc5-45f7-aec0-5c447e7679a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:36:00.259092 1596062 system_pods.go:61] "metrics-server-746fcd58dc-nbbb9" [43fcdf52-1359-4a10-8f64-c721fa11c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:36:00.259101 1596062 system_pods.go:61] "storage-provisioner" [d20cd17d-3b6e-4c2a-9d32-f047094f77a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:36:00.259111 1596062 system_pods.go:74] duration metric: took 12.770585ms to wait for pod list to return data ...
	I0929 14:36:00.259168 1596062 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:36:00.267463 1596062 default_sa.go:45] found service account: "default"
	I0929 14:36:00.267489 1596062 default_sa.go:55] duration metric: took 8.313947ms for default service account to be created ...
	I0929 14:36:00.267500 1596062 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:36:00.275897 1596062 system_pods.go:86] 8 kube-system pods found
	I0929 14:36:00.276012 1596062 system_pods.go:89] "coredns-66bc5c9577-wb8jw" [c72f66ff-a464-43c6-a0e4-82da1ba66780] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:36:00.276046 1596062 system_pods.go:89] "etcd-default-k8s-diff-port-186820" [a89a2e2c-7628-44d9-a0ff-f7a51680fa48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:36:00.276089 1596062 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-186820" [f6270c6c-df3a-461a-94d1-b1c494e85f0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:36:00.276122 1596062 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-186820" [e5cd4b48-40ea-44c9-9389-804a2a149bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:36:00.276164 1596062 system_pods.go:89] "kube-proxy-xbpqv" [0cb52a5d-89e9-4ed8-9ff3-93c7f80b94a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:36:00.276193 1596062 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-186820" [05635437-5cc5-45f7-aec0-5c447e7679a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:36:00.276220 1596062 system_pods.go:89] "metrics-server-746fcd58dc-nbbb9" [43fcdf52-1359-4a10-8f64-c721fa11c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:36:00.276263 1596062 system_pods.go:89] "storage-provisioner" [d20cd17d-3b6e-4c2a-9d32-f047094f77a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:36:00.276302 1596062 system_pods.go:126] duration metric: took 8.789614ms to wait for k8s-apps to be running ...
	I0929 14:36:00.276347 1596062 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:36:00.276463 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:36:00.322130 1596062 system_svc.go:56] duration metric: took 45.77635ms WaitForService to wait for kubelet
	I0929 14:36:00.322171 1596062 kubeadm.go:578] duration metric: took 9.258650816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:36:00.322195 1596062 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:36:00.330255 1596062 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:36:00.330363 1596062 node_conditions.go:123] node cpu capacity is 2
	I0929 14:36:00.330378 1596062 node_conditions.go:105] duration metric: took 8.17742ms to run NodePressure ...
	I0929 14:36:00.330394 1596062 start.go:241] waiting for startup goroutines ...
	I0929 14:36:00.330402 1596062 start.go:246] waiting for cluster config update ...
	I0929 14:36:00.330414 1596062 start.go:255] writing updated cluster config ...
	I0929 14:36:00.330883 1596062 ssh_runner.go:195] Run: rm -f paused
	I0929 14:36:00.336791 1596062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:36:00.352867 1596062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wb8jw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:36:02.362537 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:04.859542 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:06.860829 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:09.359186 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:11.859196 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:14.358754 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:16.859093 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:19.358587 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:21.362560 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:23.858978 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:25.863368 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:27.868276 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:30.358700 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:32.358763 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	I0929 14:36:32.858935 1596062 pod_ready.go:94] pod "coredns-66bc5c9577-wb8jw" is "Ready"
	I0929 14:36:32.858962 1596062 pod_ready.go:86] duration metric: took 32.506066188s for pod "coredns-66bc5c9577-wb8jw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.862337 1596062 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.868713 1596062 pod_ready.go:94] pod "etcd-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:32.868746 1596062 pod_ready.go:86] duration metric: took 6.378054ms for pod "etcd-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.871570 1596062 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.876378 1596062 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:32.876410 1596062 pod_ready.go:86] duration metric: took 4.809833ms for pod "kube-apiserver-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.879056 1596062 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.057602 1596062 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:33.057631 1596062 pod_ready.go:86] duration metric: took 178.552151ms for pod "kube-controller-manager-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.256851 1596062 pod_ready.go:83] waiting for pod "kube-proxy-xbpqv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.657271 1596062 pod_ready.go:94] pod "kube-proxy-xbpqv" is "Ready"
	I0929 14:36:33.657301 1596062 pod_ready.go:86] duration metric: took 400.41966ms for pod "kube-proxy-xbpqv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.857548 1596062 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:34.256475 1596062 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:34.256548 1596062 pod_ready.go:86] duration metric: took 398.968386ms for pod "kube-scheduler-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:34.256562 1596062 pod_ready.go:40] duration metric: took 33.919672235s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:36:34.315168 1596062 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 14:36:34.318274 1596062 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-186820" cluster and "default" namespace by default
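The pod_ready loop above (about 32s for coredns-66bc5c9577-wb8jw) waits on each pod's Ready condition rather than on container state alone. A hedged client-go sketch of an equivalent wait; the kubeconfig path is an assumption and this is not minikube's pod_ready implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its PodReady condition is True or the
// timeout expires. Illustrative only.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println(waitPodReady(cs, "kube-system", "coredns-66bc5c9577-wb8jw", 4*time.Minute))
}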
	
	
	==> Docker <==
	Sep 29 14:37:35 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:37:35.044763912Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:37:35 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:37:35.044864081Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:37:35 default-k8s-diff-port-186820 cri-dockerd[1213]: time="2025-09-29T14:37:35Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:37:35 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:37:35.661759672Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:37:35 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:37:35.753908857Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:38:57 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:38:57.620695754Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:38:57 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:38:57.620735853Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:38:57 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:38:57.623799620Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:38:57 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:38:57.623840072Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:39:05 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:39:05.850596736Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:39:06 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:39:06.054064682Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:39:06 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:39:06.054168503Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:39:06 default-k8s-diff-port-186820 cri-dockerd[1213]: time="2025-09-29T14:39:06Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:39:06 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:39:06.106982663Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:39:06 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:39:06.205066699Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:41:40 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:40.640051688Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:41:40 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:40.640094561Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:41:40 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:40.643262786Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:41:40 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:40.643306709Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:41:49 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:49.852630856Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:41:50 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:50.058766964Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:41:50 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:50.058893087Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:41:50 default-k8s-diff-port-186820 cri-dockerd[1213]: time="2025-09-29T14:41:50Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:41:53 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:53.650071444Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:41:53 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:53.741755849Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
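The "toomanyrequests" lines in the Docker log above are anonymous Docker Hub pull-rate-limit rejections for docker.io/kubernetesui/dashboard; the fake.domain DNS failures and the schema-1 manifest warnings for registry.k8s.io/echoserver:1.4 are expected, since the test deliberately points metrics-server at a bogus registry. Docker Hub reports the remaining anonymous quota via response headers on a documented preview manifest; a hedged Go sketch of reading them (endpoint and header names as published by Docker, treated here as assumptions):

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// checkPullQuota fetches an anonymous token and prints the ratelimit-limit /
// ratelimit-remaining headers from Docker Hub's rate-limit preview manifest.
// Sketch only; the endpoint is not guaranteed to stay stable.
func checkPullQuota() error {
	resp, err := http.Get("https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	var tok struct {
		Token string `json:"token"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&tok); err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodHead, "https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest", nil)
	if err != nil {
		return err
	}
	req.Header.Set("Authorization", "Bearer "+tok.Token)
	r2, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer r2.Body.Close()
	fmt.Println("limit:", r2.Header.Get("ratelimit-limit"), "remaining:", r2.Header.Get("ratelimit-remaining"))
	return nil
}

func main() {
	if err := checkPullQuota(); err != nil {
		fmt.Println(err)
	}
}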
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a41c3ebb08e2f       ba04bb24b9575                                                                                         8 minutes ago       Running             storage-provisioner       2                   7cbd08ca40ade       storage-provisioner
	a66150627d5dd       1611cd07b61d5                                                                                         9 minutes ago       Running             busybox                   1                   a809e47e5f523       busybox
	cfdd90547c839       138784d87c9c5                                                                                         9 minutes ago       Running             coredns                   1                   329e63b0ea158       coredns-66bc5c9577-wb8jw
	e8c1cb770762c       6fc32d66c1411                                                                                         9 minutes ago       Running             kube-proxy                1                   2d6e46f3a03ea       kube-proxy-xbpqv
	1b07bacc73620       ba04bb24b9575                                                                                         9 minutes ago       Exited              storage-provisioner       1                   7cbd08ca40ade       storage-provisioner
	19777c9fb07d6       a1894772a478e                                                                                         9 minutes ago       Running             etcd                      1                   ab8390d7e98a7       etcd-default-k8s-diff-port-186820
	2f7b7ee7a1f85       d291939e99406                                                                                         9 minutes ago       Running             kube-apiserver            1                   8b203e39b310f       kube-apiserver-default-k8s-diff-port-186820
	bf98b9af0d1be       996be7e86d9b3                                                                                         9 minutes ago       Running             kube-controller-manager   1                   339ad92a5ef7a       kube-controller-manager-default-k8s-diff-port-186820
	1befa8ef69edf       a25f5ef9c34c3                                                                                         9 minutes ago       Running             kube-scheduler            1                   71ac6bb7c0203       kube-scheduler-default-k8s-diff-port-186820
	81ed9a49211c4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   10 minutes ago      Exited              busybox                   0                   8fc319b7ece9e       busybox
	09ce2b32e9384       138784d87c9c5                                                                                         10 minutes ago      Exited              coredns                   0                   27d4ee97939c6       coredns-66bc5c9577-wb8jw
	9bcc157f5d0b5       6fc32d66c1411                                                                                         10 minutes ago      Exited              kube-proxy                0                   cc4fbe899b17c       kube-proxy-xbpqv
	f8c7812825a6e       a1894772a478e                                                                                         10 minutes ago      Exited              etcd                      0                   ddc923564de22       etcd-default-k8s-diff-port-186820
	10a7ca49cb32f       996be7e86d9b3                                                                                         10 minutes ago      Exited              kube-controller-manager   0                   e0eeed2acb2c0       kube-controller-manager-default-k8s-diff-port-186820
	4143337be7961       d291939e99406                                                                                         10 minutes ago      Exited              kube-apiserver            0                   4e6884310d1b4       kube-apiserver-default-k8s-diff-port-186820
	976b937428341       a25f5ef9c34c3                                                                                         10 minutes ago      Exited              kube-scheduler            0                   c1d647945a1fb       kube-scheduler-default-k8s-diff-port-186820
	
	
	==> coredns [09ce2b32e938] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37889 - 23687 "HINFO IN 9099155277532789114.850322349326940009. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027739509s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cfdd90547c83] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40133 - 52329 "HINFO IN 3160799206667991236.5911197496832820412. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003928481s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-186820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-186820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=default-k8s-diff-port-186820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T14_35_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 14:35:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-186820
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 14:45:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 14:42:04 +0000   Mon, 29 Sep 2025 14:34:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 14:42:04 +0000   Mon, 29 Sep 2025 14:34:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 14:42:04 +0000   Mon, 29 Sep 2025 14:34:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 14:42:04 +0000   Mon, 29 Sep 2025 14:35:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-186820
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 de263a2c3db04d31a5a11d96202af393
	  System UUID:                e2931296-2bdf-4282-ac79-ad3b5addc2af
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-wb8jw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     10m
	  kube-system                 etcd-default-k8s-diff-port-186820                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         10m
	  kube-system                 kube-apiserver-default-k8s-diff-port-186820             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-186820    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-xbpqv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-default-k8s-diff-port-186820             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 metrics-server-746fcd58dc-nbbb9                         100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         10m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zfpvt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tdxbq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 10m                    kube-proxy       
	  Normal   Starting                 9m36s                  kube-proxy       
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Warning  CgroupV1                 10m                    kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m                    kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    10m                    kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m                    kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           10m                    node-controller  Node default-k8s-diff-port-186820 event: Registered Node default-k8s-diff-port-186820 in Controller
	  Normal   Starting                 9m46s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m46s                  kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  9m46s (x8 over 9m46s)  kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m46s (x8 over 9m46s)  kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m46s (x7 over 9m46s)  kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m46s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           9m34s                  node-controller  Node default-k8s-diff-port-186820 event: Registered Node default-k8s-diff-port-186820 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [19777c9fb07d] <==
	{"level":"warn","ts":"2025-09-29T14:35:55.266693Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.324391Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59518","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.361659Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.388287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59546","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.414630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.445169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.470783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.501021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.534435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.559773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.601975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.621776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.650672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.688877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.710190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.731716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.755574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.779014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.794347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.815239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.839666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.870291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.888255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.906275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:56.014445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59884","server-name":"","error":"EOF"}
	
	
	==> etcd [f8c7812825a6] <==
	{"level":"warn","ts":"2025-09-29T14:34:59.961012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:34:59.989611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.013961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.131238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.165362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.178718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.335890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59906","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:35:31.515574Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T14:35:31.515639Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-186820","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-09-29T14:35:31.515761Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:35:38.518572Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:35:38.518837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:35:38.518942Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-09-29T14:35:38.519126Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T14:35:38.519190Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T14:35:38.520466Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:35:38.520662Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:35:38.520723Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T14:35:38.520937Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:35:38.521047Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:35:38.521148Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:35:38.523420Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-09-29T14:35:38.523699Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:35:38.523861Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-29T14:35:38.523984Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-186820","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 14:45:36 up  6:28,  0 users,  load average: 0.23, 0.69, 1.56
	Linux default-k8s-diff-port-186820 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2f7b7ee7a1f8] <==
	I0929 14:40:57.949133       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:41:46.656649       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:41:54.334148       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:41:57.948338       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:41:57.948384       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:41:57.948399       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:41:57.949535       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:41:57.949699       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:41:57.949722       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:43:09.746590       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:43:14.839938       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:43:57.949349       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:43:57.949439       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:43:57.949474       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:43:57.950503       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:43:57.950603       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:43:57.950618       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:44:31.954407       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:44:34.972335       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [4143337be796] <==
	W0929 14:35:40.768572       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.771050       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.772569       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.809474       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.810937       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.883220       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.914767       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.938695       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.952633       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.010308       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.041243       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.061925       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.061925       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.074342       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.085131       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.142169       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.182434       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.276796       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.288334       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.312844       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.321633       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.453520       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.511380       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.514919       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.544607       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [10a7ca49cb32] <==
	I0929 14:35:08.245993       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 14:35:08.246024       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 14:35:08.246122       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 14:35:08.246248       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 14:35:08.246518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:35:08.246534       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 14:35:08.246546       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 14:35:08.246905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 14:35:08.247039       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 14:35:08.247158       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 14:35:08.247615       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 14:35:08.248062       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 14:35:08.248483       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 14:35:08.249759       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 14:35:08.252890       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 14:35:08.252919       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 14:35:08.253268       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 14:35:08.253482       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 14:35:08.253497       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 14:35:08.253505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 14:35:08.252959       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 14:35:08.255343       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 14:35:08.263079       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-186820" podCIDRs=["10.244.0.0/24"]
	I0929 14:35:08.274345       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E0929 14:35:30.887552       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [bf98b9af0d1b] <==
	I0929 14:39:32.412303       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:40:02.363869       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:40:02.420689       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:40:32.368179       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:40:32.429903       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:41:02.372571       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:41:02.438488       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:41:32.377264       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:41:32.447242       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:42:02.382376       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:42:02.455791       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:42:32.387308       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:42:32.464215       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:43:02.391798       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:43:02.472299       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:43:32.396643       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:43:32.481172       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:44:02.401471       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:44:02.489573       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:44:32.406010       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:44:32.497361       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:45:02.411040       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:45:02.506894       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:45:32.416192       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:45:32.514197       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [9bcc157f5d0b] <==
	I0929 14:35:10.165639       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:35:10.306443       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:35:10.407234       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:35:10.407291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 14:35:10.407379       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:35:10.454449       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:35:10.454585       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:35:10.482598       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:35:10.483189       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:35:10.483207       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:35:10.487889       1 config.go:200] "Starting service config controller"
	I0929 14:35:10.487906       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:35:10.500758       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:35:10.500830       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:35:10.500868       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:35:10.500873       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:35:10.502037       1 config.go:309] "Starting node config controller"
	I0929 14:35:10.502047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:35:10.502055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:35:10.589794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:35:10.601734       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 14:35:10.601768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e8c1cb770762] <==
	I0929 14:35:59.296846       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:35:59.378257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:35:59.478710       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:35:59.478747       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 14:35:59.478880       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:35:59.506784       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:35:59.506846       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:35:59.521655       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:35:59.522106       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:35:59.522130       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:35:59.523731       1 config.go:200] "Starting service config controller"
	I0929 14:35:59.523747       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:35:59.523763       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:35:59.523767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:35:59.523789       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:35:59.523793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:35:59.528361       1 config.go:309] "Starting node config controller"
	I0929 14:35:59.528400       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:35:59.528409       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:35:59.627795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 14:35:59.627912       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:35:59.627937       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1befa8ef69ed] <==
	I0929 14:35:54.618445       1 serving.go:386] Generated self-signed cert in-memory
	W0929 14:35:56.799923       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 14:35:56.799966       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 14:35:56.799977       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 14:35:56.799985       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 14:35:56.977173       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 14:35:56.977203       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:35:56.979841       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 14:35:56.979948       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:35:56.979971       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:35:56.979994       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 14:35:57.080386       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [976b93742834] <==
	E0929 14:35:01.362533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:35:01.362617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:35:01.362822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 14:35:01.362945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 14:35:01.363017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 14:35:01.363047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 14:35:01.363131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 14:35:01.362858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 14:35:01.363213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 14:35:02.178153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 14:35:02.197115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:35:02.251908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:35:02.290175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 14:35:02.370997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 14:35:02.397453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 14:35:02.481809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 14:35:02.500622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 14:35:02.525150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0929 14:35:04.414622       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:35:31.491694       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 14:35:31.491803       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 14:35:31.491814       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 14:35:31.491833       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:35:31.492128       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 14:35:31.492146       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 14:43:50 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:43:50.618900    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:43:52 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:43:52.615659    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:43:59 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:43:59.609178    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:44:02 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:02.619759    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:44:07 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:07.608157    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:44:12 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:12.611492    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:44:17 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:17.608454    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:44:21 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:21.608833    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:44:24 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:24.615331    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:44:28 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:28.610471    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:44:35 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:35.608776    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:44:36 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:36.610454    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:44:41 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:41.609439    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:44:50 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:50.613263    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:44:51 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:51.609688    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:44:55 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:44:55.608420    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:45:02 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:02.615919    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:45:05 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:05.609406    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:45:09 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:09.608736    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:45:16 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:16.610040    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:45:17 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:17.608249    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:45:21 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:21.608731    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:45:30 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:30.617508    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:45:32 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:32.611967    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:45:32 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:45:32.615963    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	
	
	==> storage-provisioner [1b07bacc7362] <==
	I0929 14:35:59.254007       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 14:36:29.256948       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a41c3ebb08e2] <==
	W0929 14:45:12.805203       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:14.808255       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:14.814928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:16.817870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:16.822705       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:18.826675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:18.832258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:20.835873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:20.842346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:22.845938       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:22.854024       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:24.857873       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:24.862923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:26.866803       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:26.873286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:28.876970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:28.881825       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:30.885041       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:30.891566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:32.894720       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:32.899758       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:34.903770       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:34.910689       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:36.914566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:45:36.922793       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-nbbb9 dashboard-metrics-scraper-6ffb444bf9-zfpvt kubernetes-dashboard-855c9754f9-tdxbq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 describe pod metrics-server-746fcd58dc-nbbb9 dashboard-metrics-scraper-6ffb444bf9-zfpvt kubernetes-dashboard-855c9754f9-tdxbq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-186820 describe pod metrics-server-746fcd58dc-nbbb9 dashboard-metrics-scraper-6ffb444bf9-zfpvt kubernetes-dashboard-855c9754f9-tdxbq: exit status 1 (85.78685ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-nbbb9" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-zfpvt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-tdxbq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-186820 describe pod metrics-server-746fcd58dc-nbbb9 dashboard-metrics-scraper-6ffb444bf9-zfpvt kubernetes-dashboard-855c9754f9-tdxbq: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.87s)
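The kubelet entries above show the two pull failures behind this timeout: registry.k8s.io/echoserver:1.4 is rejected because Docker Engine has dropped schema 1 manifest support, and the dashboard image runs into Docker Hub's unauthenticated pull rate limit. A minimal client-go sketch for listing pods stuck in this state, roughly what the post-mortem kubectl queries above check; it is not part of the harness and assumes the default kubeconfig path:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Build a client from ~/.kube/config (out-of-cluster access).
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Walk all pods and report containers waiting on image pulls,
	// i.e. the ErrImagePull / ImagePullBackOff reasons seen above.
	pods, err := clientset.CoreV1().Pods("").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range pods.Items {
		for _, cs := range pod.Status.ContainerStatuses {
			if w := cs.State.Waiting; w != nil && (w.Reason == "ImagePullBackOff" || w.Reason == "ErrImagePull") {
				fmt.Printf("%s/%s %s: %s: %s\n", pod.Namespace, pod.Name, cs.Name, w.Reason, w.Message)
			}
		}
	}
}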

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.04s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-47mqf" [da179c3b-5a5b-452e-9da4-57b22177fba3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 14:43:42.974551 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:43:47.930092 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:43:59.883433 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:44:02.358803 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:45:01.311618 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:45:03.684562 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:45:06.520948 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:45:20.566368 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:337: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-641794 -n embed-certs-641794
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 14:52:31.152304396 +0000 UTC m=+6644.417517757
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-641794 describe po kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context embed-certs-641794 describe po kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-47mqf
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             embed-certs-641794/192.168.85.2
Start Time:       Mon, 29 Sep 2025 14:33:53 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-pkbff (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-pkbff:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf to embed-certs-641794
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m33s (x63 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m33s (x63 over 18m)  kubelet            Error: ImagePullBackOff
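The events above record five pull attempts and sixty-three back-offs over 18 minutes, all ending in Docker Hub's unauthenticated rate limit. One way to avoid that on a re-run, assuming the host itself can pull (or already caches) the image, is to seed it into the profile before the wait starts; this sketch uses the tag form rather than the digest pin from the pod spec and is not something the harness does:

package main

import (
	"log"
	"os/exec"
)

// run executes a command and aborts on failure, echoing its combined output.
func run(name string, args ...string) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Fatalf("%s %v: %v\n%s", name, args, err, out)
	}
	log.Printf("%s %v: ok", name, args)
}

func main() {
	// Pull on the host, then copy the cached image into the minikube profile
	// so the kubelet never has to reach Docker Hub.
	run("docker", "pull", "kubernetesui/dashboard:v2.7.0")
	run("out/minikube-linux-arm64", "-p", "embed-certs-641794", "image", "load", "kubernetesui/dashboard:v2.7.0")
}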
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context embed-certs-641794 logs kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-641794 logs kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard: exit status 1 (102.503508ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-47mqf" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context embed-certs-641794 logs kubernetes-dashboard-855c9754f9-47mqf -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-641794 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-641794
helpers_test.go:243: (dbg) docker inspect embed-certs-641794:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24",
	        "Created": "2025-09-29T14:31:58.13596895Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1580073,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T14:33:28.565493532Z",
	            "FinishedAt": "2025-09-29T14:33:27.626783812Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24/hostname",
	        "HostsPath": "/var/lib/docker/containers/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24/hosts",
	        "LogPath": "/var/lib/docker/containers/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24/c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24-json.log",
	        "Name": "/embed-certs-641794",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-641794:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "embed-certs-641794",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "c24f0a72b68640725fcc53cf00b26b499756b095b48e0b83480d8ac76e5d1c24",
	                "LowerDir": "/var/lib/docker/overlay2/f7521dcd4374cc4c43cd92a8c207215d5eafc426d44f484d6c35dedf86164c6b-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f7521dcd4374cc4c43cd92a8c207215d5eafc426d44f484d6c35dedf86164c6b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f7521dcd4374cc4c43cd92a8c207215d5eafc426d44f484d6c35dedf86164c6b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f7521dcd4374cc4c43cd92a8c207215d5eafc426d44f484d6c35dedf86164c6b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-641794",
	                "Source": "/var/lib/docker/volumes/embed-certs-641794/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-641794",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-641794",
	                "name.minikube.sigs.k8s.io": "embed-certs-641794",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9ab2ba6a1681f3a6c8cd864ec56c876c496c43306607503628dde6d15c66dd7c",
	            "SandboxKey": "/var/run/docker/netns/9ab2ba6a1681",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34306"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34307"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34310"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34308"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34309"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-641794": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "8a:8e:0b:08:c6:69",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "74b92272e8b5ea380f2a2d8d88cf9058f170799fc14d15f976032de06e56e31f",
	                    "EndpointID": "3c69a303efe3b9fceec361df024343f7061d3e5f84cf3f88621ba1b0c92ed18c",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-641794",
	                        "c24f0a72b686"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
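In the inspect output above, HostConfig.PortBindings leaves every HostPort empty, so Docker picks ephemeral host ports; the live assignments sit under NetworkSettings.Ports (22 to 34306, 8443 to 34309, and so on). A small sketch for recovering the apiserver mapping outside the harness, reusing the container name from this report:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the Docker daemon for the host port bound to 8443/tcp on the
	// embed-certs-641794 container (the same data shown in the JSON above).
	out, err := exec.Command("docker", "inspect",
		"--format", `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"embed-certs-641794").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver reachable on 127.0.0.1:" + strings.TrimSpace(string(out)))
}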
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-641794 -n embed-certs-641794
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-641794 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p embed-certs-641794 logs -n 25: (1.345377124s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬────────────
─────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼────────────
─────────┤
	│ image   │ no-preload-983174 image list --format=json                                                                                                                                                                                                      │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ pause   │ -p no-preload-983174 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ unpause │ -p no-preload-983174 --alsologtostderr -v=1                                                                                                                                                                                                     │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p no-preload-983174                                                                                                                                                                                                                            │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p no-preload-983174                                                                                                                                                                                                                            │ no-preload-983174            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ delete  │ -p disable-driver-mounts-627946                                                                                                                                                                                                                 │ disable-driver-mounts-627946 │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ start   │ -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-641794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ stop    │ -p embed-certs-641794 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-641794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ start   │ -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-093064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ stop    │ -p newest-cni-093064 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-093064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ start   │ -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ image   │ newest-cni-093064 image list --format=json                                                                                                                                                                                                      │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ pause   │ -p newest-cni-093064 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ unpause │ -p newest-cni-093064 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ delete  │ -p newest-cni-093064                                                                                                                                                                                                                            │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ delete  │ -p newest-cni-093064                                                                                                                                                                                                                            │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ start   │ -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-186820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ stop    │ -p default-k8s-diff-port-186820 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-186820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ start   │ -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:36 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 14:35:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 14:35:42.456122 1596062 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:35:42.456362 1596062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:35:42.456395 1596062 out.go:374] Setting ErrFile to fd 2...
	I0929 14:35:42.456415 1596062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:35:42.456738 1596062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:35:42.457163 1596062 out.go:368] Setting JSON to false
	I0929 14:35:42.458288 1596062 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22695,"bootTime":1759133848,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:35:42.458402 1596062 start.go:140] virtualization:  
	I0929 14:35:42.462007 1596062 out.go:179] * [default-k8s-diff-port-186820] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:35:42.465793 1596062 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:35:42.465926 1596062 notify.go:220] Checking for updates...
	I0929 14:35:42.471729 1596062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:35:42.474683 1596062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:42.477543 1596062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:35:42.480431 1596062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:35:42.483237 1596062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:35:42.486711 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:42.487301 1596062 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:35:42.514877 1596062 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:35:42.515008 1596062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:35:42.572860 1596062 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:35:42.562452461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:35:42.572973 1596062 docker.go:318] overlay module found
	I0929 14:35:42.576085 1596062 out.go:179] * Using the docker driver based on existing profile
	I0929 14:35:42.578939 1596062 start.go:304] selected driver: docker
	I0929 14:35:42.578961 1596062 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:42.579120 1596062 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:35:42.579853 1596062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:35:42.635895 1596062 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:35:42.626575461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:35:42.636238 1596062 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:35:42.636278 1596062 cni.go:84] Creating CNI manager for ""
	I0929 14:35:42.636347 1596062 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:35:42.636386 1596062 start.go:348] cluster config:
	{Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:42.641605 1596062 out.go:179] * Starting "default-k8s-diff-port-186820" primary control-plane node in "default-k8s-diff-port-186820" cluster
	I0929 14:35:42.645130 1596062 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:35:42.648466 1596062 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:35:42.651441 1596062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:35:42.651462 1596062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:35:42.651506 1596062 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 14:35:42.651523 1596062 cache.go:58] Caching tarball of preloaded images
	I0929 14:35:42.651603 1596062 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 14:35:42.651613 1596062 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 14:35:42.651737 1596062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/config.json ...
	I0929 14:35:42.671234 1596062 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:35:42.671260 1596062 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:35:42.671281 1596062 cache.go:232] Successfully downloaded all kic artifacts
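The cache lines above show minikube skipping both the pull and the load of the kic base image because the digest-pinned reference already exists in the local daemon. A minimal way to reproduce that presence check with the plain docker CLI (image reference copied from the log, everything else illustrative):

    # Exits 0 and prints the image ID only if the digest-pinned image is already present locally.
    docker image inspect \
      gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 \
      --format 'found {{.Id}}' || echo 'not present locally, a pull would be needed'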
	I0929 14:35:42.671312 1596062 start.go:360] acquireMachinesLock for default-k8s-diff-port-186820: {Name:mk14ee05a72e1bc87d0193bcc4d30163df297691 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:35:42.671385 1596062 start.go:364] duration metric: took 48.354µs to acquireMachinesLock for "default-k8s-diff-port-186820"
	I0929 14:35:42.671408 1596062 start.go:96] Skipping create...Using existing machine configuration
	I0929 14:35:42.671416 1596062 fix.go:54] fixHost starting: 
	I0929 14:35:42.671679 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:42.688259 1596062 fix.go:112] recreateIfNeeded on default-k8s-diff-port-186820: state=Stopped err=<nil>
	W0929 14:35:42.688293 1596062 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 14:35:42.691565 1596062 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-186820" ...
	I0929 14:35:42.691663 1596062 cli_runner.go:164] Run: docker start default-k8s-diff-port-186820
	I0929 14:35:42.980213 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:43.009181 1596062 kic.go:430] container "default-k8s-diff-port-186820" state is running.
	I0929 14:35:43.009618 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:43.038170 1596062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/config.json ...
	I0929 14:35:43.038413 1596062 machine.go:93] provisionDockerMachine start ...
	I0929 14:35:43.038482 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:43.061723 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:43.062111 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:43.062127 1596062 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:35:43.062747 1596062 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41000->127.0.0.1:34321: read: connection reset by peer
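The reset above is expected right after `docker start`: sshd inside the restarted container is not listening yet, so the first dial on the forwarded port fails and libmachine retries until it succeeds on the next attempt, logged just below. A rough, purely illustrative way to wait for the same port by hand (assumes nc is available; port 34321 is the one logged above):

    # Poll the forwarded SSH port until it accepts TCP connections, for at most 30 seconds.
    for i in $(seq 1 30); do
      nc -z 127.0.0.1 34321 && break
      sleep 1
    done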
	I0929 14:35:46.204046 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-186820
	
	I0929 14:35:46.204073 1596062 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-186820"
	I0929 14:35:46.204141 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.222056 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.222389 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.222406 1596062 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-186820 && echo "default-k8s-diff-port-186820" | sudo tee /etc/hostname
	I0929 14:35:46.377247 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-186820
	
	I0929 14:35:46.377348 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.397114 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.397485 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.397509 1596062 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-186820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-186820/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-186820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:35:46.537135 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
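The two SSH commands above set the node hostname and make sure /etc/hosts carries a matching 127.0.1.1 entry, adding one only if no line for the hostname exists yet. A quick manual check of the result, run inside the node (illustrative, not part of the test):

    hostname                        # expect: default-k8s-diff-port-186820
    grep -n '127.0.1.1' /etc/hosts  # expect a line mapping 127.0.1.1 to the same name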
	I0929 14:35:46.537160 1596062 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:35:46.537236 1596062 ubuntu.go:190] setting up certificates
	I0929 14:35:46.537245 1596062 provision.go:84] configureAuth start
	I0929 14:35:46.537316 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:46.558841 1596062 provision.go:143] copyHostCerts
	I0929 14:35:46.558910 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:35:46.558934 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:35:46.559026 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:35:46.559142 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:35:46.559154 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:35:46.559183 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:35:46.559251 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:35:46.559260 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:35:46.559289 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:35:46.559350 1596062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-186820 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-186820 localhost minikube]
	I0929 14:35:46.733893 1596062 provision.go:177] copyRemoteCerts
	I0929 14:35:46.733959 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:35:46.733998 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.755356 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:46.858489 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:35:46.883909 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 14:35:46.910465 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 14:35:46.942412 1596062 provision.go:87] duration metric: took 405.141346ms to configureAuth
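configureAuth regenerated a server certificate whose SANs (listed at 14:35:46.559) cover 127.0.0.1, 192.168.76.2 and the node names, then copied ca.pem, server.pem and server-key.pem into /etc/docker so dockerd can run with --tlsverify. To eyeball what was installed (illustrative; the -ext flag needs OpenSSL 1.1.1 or newer):

    sudo openssl x509 -in /etc/docker/server.pem -noout -subject -ext subjectAltName
    sudo openssl x509 -in /etc/docker/server.pem -noout -dates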
	I0929 14:35:46.942438 1596062 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:35:46.942640 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:46.942699 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.959513 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.959825 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.959842 1596062 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:35:47.108999 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:35:47.109020 1596062 ubuntu.go:71] root file system type: overlay
	I0929 14:35:47.109131 1596062 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:35:47.109201 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.126915 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:47.127240 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:47.127365 1596062 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:35:47.281272 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:35:47.281364 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.299262 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:47.299576 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:47.299606 1596062 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:35:47.450591 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:35:47.450619 1596062 machine.go:96] duration metric: took 4.41218926s to provisionDockerMachine
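The one-liner at 14:35:47.299 makes the unit update idempotent: docker.service.new only replaces the installed unit (followed by daemon-reload, enable and restart) when diff reports a difference, so an unchanged file costs nothing. To confirm what systemd is actually running afterwards (illustrative commands, executed inside the node):

    # The installed unit should contain the dockerd ExecStart with the TLS and ulimit flags shown above.
    sudo systemctl cat docker.service | grep '^ExecStart='
    sudo systemctl is-active docker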
	I0929 14:35:47.450630 1596062 start.go:293] postStartSetup for "default-k8s-diff-port-186820" (driver="docker")
	I0929 14:35:47.450641 1596062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:35:47.450716 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:35:47.450765 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.470252 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.570022 1596062 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:35:47.573521 1596062 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:35:47.573556 1596062 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:35:47.573567 1596062 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:35:47.573574 1596062 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:35:47.573585 1596062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:35:47.573643 1596062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:35:47.573731 1596062 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:35:47.573850 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:35:47.582484 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:35:47.607719 1596062 start.go:296] duration metric: took 157.074022ms for postStartSetup
	I0929 14:35:47.607821 1596062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:35:47.607869 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.624930 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.721416 1596062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:35:47.725935 1596062 fix.go:56] duration metric: took 5.054511148s for fixHost
	I0929 14:35:47.725957 1596062 start.go:83] releasing machines lock for "default-k8s-diff-port-186820", held for 5.054560232s
	I0929 14:35:47.726022 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:47.743658 1596062 ssh_runner.go:195] Run: cat /version.json
	I0929 14:35:47.743708 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.743985 1596062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:35:47.744046 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.767655 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.776135 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.868074 1596062 ssh_runner.go:195] Run: systemctl --version
	I0929 14:35:48.003111 1596062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:35:48.010051 1596062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:35:48.037046 1596062 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:35:48.037127 1596062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:35:48.046790 1596062 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 14:35:48.046821 1596062 start.go:495] detecting cgroup driver to use...
	I0929 14:35:48.046855 1596062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:35:48.046959 1596062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:35:48.064298 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:35:48.077373 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:35:48.087939 1596062 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:35:48.088011 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:35:48.099214 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:35:48.109800 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:35:48.119860 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:35:48.129709 1596062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:35:48.140034 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:35:48.151023 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:35:48.162212 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:35:48.173065 1596062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:35:48.182304 1596062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 14:35:48.191122 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:48.275156 1596062 ssh_runner.go:195] Run: sudo systemctl restart containerd
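The sed series between 14:35:48.064 and 14:35:48.162 rewrites /etc/containerd/config.toml to use the cgroupfs cgroup driver, the runc v2 shim, pause:3.10.1 as the sandbox image and /etc/cni/net.d as the CNI config dir; the kubelet configuration generated later in this log declares the same cgroupfs driver, and the two must agree. A quick spot-check of the result (illustrative):

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml
    docker info --format '{{.CgroupDriver}}'   # the same check minikube itself runs at 14:35:50.146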
	I0929 14:35:48.388383 1596062 start.go:495] detecting cgroup driver to use...
	I0929 14:35:48.388435 1596062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:35:48.388487 1596062 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:35:48.403898 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:35:48.417945 1596062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:35:48.450429 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:35:48.462890 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:35:48.476336 1596062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:35:48.497267 1596062 ssh_runner.go:195] Run: which cri-dockerd
	I0929 14:35:48.501572 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:35:48.513810 1596062 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:35:48.548394 1596062 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:35:48.651762 1596062 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:35:48.744803 1596062 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:35:48.744903 1596062 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0929 14:35:48.765355 1596062 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:35:48.778732 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:48.873398 1596062 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 14:35:49.382274 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 14:35:49.394500 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:35:49.406617 1596062 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 14:35:49.420787 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:35:49.432705 1596062 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:35:49.525907 1596062 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:35:49.612769 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:49.715560 1596062 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:35:49.731642 1596062 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:35:49.743392 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:49.840499 1596062 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:35:49.933414 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:35:49.952842 1596062 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:35:49.952912 1596062 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:35:49.956643 1596062 start.go:563] Will wait 60s for crictl version
	I0929 14:35:49.956708 1596062 ssh_runner.go:195] Run: which crictl
	I0929 14:35:49.960634 1596062 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:35:50.005514 1596062 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
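Because /etc/crictl.yaml was pointed at unix:///var/run/cri-dockerd.sock at 14:35:48.476, the crictl call above reports RuntimeName docker / RuntimeVersion 28.4.0 through cri-dockerd rather than talking to containerd. The equivalent explicit invocation, with the endpoint spelled out (illustrative):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock info | head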
	I0929 14:35:50.005607 1596062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:35:50.035266 1596062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:35:50.064977 1596062 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:35:50.065096 1596062 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-186820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:35:50.085518 1596062 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 14:35:50.090438 1596062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:35:50.104259 1596062 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:35:50.104391 1596062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:35:50.104452 1596062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:35:50.126383 1596062 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:35:50.126409 1596062 docker.go:621] Images already preloaded, skipping extraction
	I0929 14:35:50.126472 1596062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:35:50.146276 1596062 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:35:50.146318 1596062 cache_images.go:85] Images are preloaded, skipping loading
	I0929 14:35:50.146329 1596062 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 docker true true} ...
	I0929 14:35:50.146441 1596062 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-186820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 14:35:50.146513 1596062 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:35:50.200396 1596062 cni.go:84] Creating CNI manager for ""
	I0929 14:35:50.200426 1596062 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:35:50.200440 1596062 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:35:50.200460 1596062 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-186820 NodeName:default-k8s-diff-port-186820 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:35:50.200650 1596062 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-186820"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0929 14:35:50.200727 1596062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:35:50.210044 1596062 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 14:35:50.210118 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:35:50.219378 1596062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0929 14:35:50.237028 1596062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:35:50.255641 1596062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
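The 2229-byte file copied above is the combined InitConfiguration/ClusterConfiguration/KubeletConfiguration/KubeProxyConfiguration printed at 14:35:50.200. If you want to sanity-check such a file by hand, recent kubeadm releases can validate it against the config API (illustrative command, not something this test runs; paths copied from the log):

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new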
	I0929 14:35:50.274465 1596062 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:35:50.278275 1596062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:35:50.289351 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:50.378347 1596062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:35:50.393916 1596062 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820 for IP: 192.168.76.2
	I0929 14:35:50.393942 1596062 certs.go:194] generating shared ca certs ...
	I0929 14:35:50.393959 1596062 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:50.394101 1596062 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:35:50.394152 1596062 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:35:50.394164 1596062 certs.go:256] generating profile certs ...
	I0929 14:35:50.394266 1596062 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/client.key
	I0929 14:35:50.394344 1596062 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.key.3abc893e
	I0929 14:35:50.394410 1596062 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.key
	I0929 14:35:50.394524 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:35:50.394563 1596062 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:35:50.394576 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:35:50.394602 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:35:50.394627 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:35:50.394652 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:35:50.394699 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:35:50.395324 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:35:50.425482 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:35:50.458821 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:35:50.492420 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:35:50.551343 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 14:35:50.605319 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 14:35:50.639423 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:35:50.678207 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 14:35:50.718215 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:35:50.747191 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:35:50.779504 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:35:50.809480 1596062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:35:50.830273 1596062 ssh_runner.go:195] Run: openssl version
	I0929 14:35:50.836472 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:35:50.848203 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.851953 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.852017 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.859388 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:35:50.868867 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:35:50.878588 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.882188 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.882261 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.890114 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 14:35:50.899476 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:35:50.909249 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.913394 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.913486 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.921135 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:35:50.930563 1596062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:35:50.934410 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 14:35:50.941795 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 14:35:50.950427 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 14:35:50.960816 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 14:35:50.970602 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 14:35:50.977819 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
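The openssl calls above do two different jobs: `x509 -hash` computes the subject-hash names used for the /etc/ssl/certs symlinks (b5213941.0 for minikubeCA.pem, as seen at 14:35:50.921), and `-checkend 86400` exits non-zero for any control-plane certificate that would expire within the next 24 hours, presumably so expiring certs can be regenerated before kubeadm runs. The same checks on a single certificate look like this (file paths copied from the log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem      # prints b5213941
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo 'valid for at least one more day' || echo 'expires within 24h'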
	I0929 14:35:50.985284 1596062 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:50.985429 1596062 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:35:51.006801 1596062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:35:51.025256 1596062 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 14:35:51.025334 1596062 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 14:35:51.025424 1596062 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 14:35:51.041400 1596062 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 14:35:51.042316 1596062 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-186820" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:51.042910 1596062 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1125775/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-186820" cluster setting kubeconfig missing "default-k8s-diff-port-186820" context setting]
	I0929 14:35:51.043713 1596062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
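
Note: kubeconfig.go finds the "default-k8s-diff-port-186820" cluster and context missing and repairs the file under a write lock. A sketch of the same kind of repair using client-go's clientcmd package (hypothetical helper, not minikube's kubeconfig code; requires k8s.io/client-go in go.mod and points the cluster at the node endpoint https://192.168.76.2:8444):

    package main

    import (
    	"fmt"
    	"os"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
    	path := os.Getenv("KUBECONFIG") // stand-in for the jenkins kubeconfig path above
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		cfg = clientcmdapi.NewConfig() // start fresh if the file is missing or unreadable
    	}
    	name := "default-k8s-diff-port-186820"
    	// Add (or overwrite) the cluster entry and a matching context, then persist.
    	cfg.Clusters[name] = &clientcmdapi.Cluster{Server: "https://192.168.76.2:8444"}
    	cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	cfg.CurrentContext = name
    	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("kubeconfig updated:", path)
    }
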
	I0929 14:35:51.045723 1596062 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 14:35:51.061546 1596062 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 14:35:51.061580 1596062 kubeadm.go:593] duration metric: took 36.227514ms to restartPrimaryControlPlane
	I0929 14:35:51.061589 1596062 kubeadm.go:394] duration metric: took 76.316349ms to StartCluster
	I0929 14:35:51.061606 1596062 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.061666 1596062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:51.063237 1596062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.063476 1596062 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:35:51.063781 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:51.063837 1596062 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:35:51.063907 1596062 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.063922 1596062 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.063934 1596062 addons.go:247] addon storage-provisioner should already be in state true
	I0929 14:35:51.063956 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.064489 1596062 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.064568 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.064581 1596062 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-186820"
	I0929 14:35:51.064928 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.067934 1596062 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.067967 1596062 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.067974 1596062 addons.go:247] addon metrics-server should already be in state true
	I0929 14:35:51.068006 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.068449 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.069089 1596062 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.069110 1596062 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.069117 1596062 addons.go:247] addon dashboard should already be in state true
	I0929 14:35:51.069143 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.069590 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.076810 1596062 out.go:179] * Verifying Kubernetes components...
	I0929 14:35:51.091555 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:51.118136 1596062 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:35:51.125122 1596062 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.125149 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:35:51.125225 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.164326 1596062 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.164353 1596062 addons.go:247] addon default-storageclass should already be in state true
	I0929 14:35:51.164390 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.170550 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.184841 1596062 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 14:35:51.190867 1596062 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 14:35:51.199347 1596062 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 14:35:51.199401 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 14:35:51.205983 1596062 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 14:35:51.206084 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.202823 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.213345 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 14:35:51.213391 1596062 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 14:35:51.213484 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.230915 1596062 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:51.230936 1596062 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:35:51.230996 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.269958 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.296608 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.306953 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.321614 1596062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:35:51.387857 1596062 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-186820" to be "Ready" ...
	I0929 14:35:51.488310 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.584676 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 14:35:51.584747 1596062 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 14:35:51.636648 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:51.656953 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 14:35:51.656977 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 14:35:51.769528 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 14:35:51.769551 1596062 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0929 14:35:51.776704 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:51.776767 1596062 retry.go:31] will retry after 176.889773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
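
Note: the apply fails because the apiserver behind localhost:8444 is not yet accepting connections, so retry.go schedules another attempt after a short delay. A generic sketch of that retry-with-delay pattern (illustrative, not minikube's retry.go):

    package main

    import (
    	"errors"
    	"fmt"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping delay between failed attempts.
    func retry(attempts int, delay time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		fmt.Printf("attempt %d failed: %v; retrying in %s\n", i+1, err, delay)
    		time.Sleep(delay)
    	}
    	return fmt.Errorf("all %d attempts failed: %w", attempts, err)
    }

    func main() {
    	calls := 0
    	err := retry(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connection refused") // stand-in for the kubectl apply failure above
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }
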
	I0929 14:35:51.799383 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 14:35:51.799417 1596062 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 14:35:51.919355 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 14:35:51.919384 1596062 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 14:35:51.953840 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.958674 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 14:35:51.958698 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 14:35:51.997497 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:51.997523 1596062 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 14:35:52.312165 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:52.398850 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 14:35:52.398879 1596062 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 14:35:52.469654 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.469690 1596062 retry.go:31] will retry after 160.704677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 14:35:52.469763 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.469777 1596062 retry.go:31] will retry after 381.313638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.566150 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 14:35:52.566178 1596062 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 14:35:52.631374 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:52.752298 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 14:35:52.752376 1596062 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0929 14:35:52.812288 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.812366 1596062 retry.go:31] will retry after 303.64621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.851712 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:52.884643 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 14:35:52.884713 1596062 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 14:35:53.087320 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:35:53.087401 1596062 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 14:35:53.116319 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:53.151041 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:35:56.942553 1596062 node_ready.go:49] node "default-k8s-diff-port-186820" is "Ready"
	I0929 14:35:56.942583 1596062 node_ready.go:38] duration metric: took 5.554681325s for node "default-k8s-diff-port-186820" to be "Ready" ...
	I0929 14:35:56.942602 1596062 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:35:56.942665 1596062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:35:57.186445 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.55502845s)
	I0929 14:35:59.647559 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.795763438s)
	I0929 14:35:59.694900 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.578497303s)
	I0929 14:35:59.694937 1596062 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-186820"
	I0929 14:35:59.695034 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.543910818s)
	I0929 14:35:59.695216 1596062 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.752538922s)
	I0929 14:35:59.695237 1596062 api_server.go:72] duration metric: took 8.631722688s to wait for apiserver process to appear ...
	I0929 14:35:59.695243 1596062 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:35:59.695260 1596062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 14:35:59.698283 1596062 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-186820 addons enable metrics-server
	
	I0929 14:35:59.701228 1596062 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0929 14:35:59.704363 1596062 addons.go:514] duration metric: took 8.640511326s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0929 14:35:59.704573 1596062 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:35:59.704591 1596062 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:36:00.200300 1596062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 14:36:00.235965 1596062 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 14:36:00.246294 1596062 api_server.go:141] control plane version: v1.34.0
	I0929 14:36:00.246322 1596062 api_server.go:131] duration metric: took 551.072592ms to wait for apiserver health ...
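
Note: api_server.go polls https://192.168.76.2:8444/healthz until it returns 200; the earlier 500 is expected while the apiservice-discovery-controller post-start hook is still completing. A minimal probe sketch (assumption: TLS verification is skipped here for brevity, whereas minikube trusts the cluster CA):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// Assumption: skip certificate verification; production code should use the cluster CA.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://192.168.76.2:8444/healthz")
    	if err != nil {
    		fmt.Println("healthz not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    }
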
	I0929 14:36:00.246333 1596062 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:36:00.258786 1596062 system_pods.go:59] 8 kube-system pods found
	I0929 14:36:00.258905 1596062 system_pods.go:61] "coredns-66bc5c9577-wb8jw" [c72f66ff-a464-43c6-a0e4-82da1ba66780] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:36:00.258925 1596062 system_pods.go:61] "etcd-default-k8s-diff-port-186820" [a89a2e2c-7628-44d9-a0ff-f7a51680fa48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:36:00.258935 1596062 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-186820" [f6270c6c-df3a-461a-94d1-b1c494e85f0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:36:00.258944 1596062 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-186820" [e5cd4b48-40ea-44c9-9389-804a2a149bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:36:00.259016 1596062 system_pods.go:61] "kube-proxy-xbpqv" [0cb52a5d-89e9-4ed8-9ff3-93c7f80b94a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:36:00.259074 1596062 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-186820" [05635437-5cc5-45f7-aec0-5c447e7679a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:36:00.259092 1596062 system_pods.go:61] "metrics-server-746fcd58dc-nbbb9" [43fcdf52-1359-4a10-8f64-c721fa11c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:36:00.259101 1596062 system_pods.go:61] "storage-provisioner" [d20cd17d-3b6e-4c2a-9d32-f047094f77a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:36:00.259111 1596062 system_pods.go:74] duration metric: took 12.770585ms to wait for pod list to return data ...
	I0929 14:36:00.259168 1596062 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:36:00.267463 1596062 default_sa.go:45] found service account: "default"
	I0929 14:36:00.267489 1596062 default_sa.go:55] duration metric: took 8.313947ms for default service account to be created ...
	I0929 14:36:00.267500 1596062 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:36:00.275897 1596062 system_pods.go:86] 8 kube-system pods found
	I0929 14:36:00.276012 1596062 system_pods.go:89] "coredns-66bc5c9577-wb8jw" [c72f66ff-a464-43c6-a0e4-82da1ba66780] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:36:00.276046 1596062 system_pods.go:89] "etcd-default-k8s-diff-port-186820" [a89a2e2c-7628-44d9-a0ff-f7a51680fa48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:36:00.276089 1596062 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-186820" [f6270c6c-df3a-461a-94d1-b1c494e85f0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:36:00.276122 1596062 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-186820" [e5cd4b48-40ea-44c9-9389-804a2a149bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:36:00.276164 1596062 system_pods.go:89] "kube-proxy-xbpqv" [0cb52a5d-89e9-4ed8-9ff3-93c7f80b94a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:36:00.276193 1596062 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-186820" [05635437-5cc5-45f7-aec0-5c447e7679a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:36:00.276220 1596062 system_pods.go:89] "metrics-server-746fcd58dc-nbbb9" [43fcdf52-1359-4a10-8f64-c721fa11c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:36:00.276263 1596062 system_pods.go:89] "storage-provisioner" [d20cd17d-3b6e-4c2a-9d32-f047094f77a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:36:00.276302 1596062 system_pods.go:126] duration metric: took 8.789614ms to wait for k8s-apps to be running ...
	I0929 14:36:00.276347 1596062 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:36:00.276463 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:36:00.322130 1596062 system_svc.go:56] duration metric: took 45.77635ms WaitForService to wait for kubelet
	I0929 14:36:00.322171 1596062 kubeadm.go:578] duration metric: took 9.258650816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:36:00.322195 1596062 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:36:00.330255 1596062 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:36:00.330363 1596062 node_conditions.go:123] node cpu capacity is 2
	I0929 14:36:00.330378 1596062 node_conditions.go:105] duration metric: took 8.17742ms to run NodePressure ...
	I0929 14:36:00.330394 1596062 start.go:241] waiting for startup goroutines ...
	I0929 14:36:00.330402 1596062 start.go:246] waiting for cluster config update ...
	I0929 14:36:00.330414 1596062 start.go:255] writing updated cluster config ...
	I0929 14:36:00.330883 1596062 ssh_runner.go:195] Run: rm -f paused
	I0929 14:36:00.336791 1596062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:36:00.352867 1596062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wb8jw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:36:02.362537 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:04.859542 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:06.860829 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:09.359186 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:11.859196 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:14.358754 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:16.859093 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:19.358587 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:21.362560 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:23.858978 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:25.863368 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:27.868276 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:30.358700 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:32.358763 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	I0929 14:36:32.858935 1596062 pod_ready.go:94] pod "coredns-66bc5c9577-wb8jw" is "Ready"
	I0929 14:36:32.858962 1596062 pod_ready.go:86] duration metric: took 32.506066188s for pod "coredns-66bc5c9577-wb8jw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.862337 1596062 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.868713 1596062 pod_ready.go:94] pod "etcd-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:32.868746 1596062 pod_ready.go:86] duration metric: took 6.378054ms for pod "etcd-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.871570 1596062 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.876378 1596062 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:32.876410 1596062 pod_ready.go:86] duration metric: took 4.809833ms for pod "kube-apiserver-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.879056 1596062 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.057602 1596062 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:33.057631 1596062 pod_ready.go:86] duration metric: took 178.552151ms for pod "kube-controller-manager-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.256851 1596062 pod_ready.go:83] waiting for pod "kube-proxy-xbpqv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.657271 1596062 pod_ready.go:94] pod "kube-proxy-xbpqv" is "Ready"
	I0929 14:36:33.657301 1596062 pod_ready.go:86] duration metric: took 400.41966ms for pod "kube-proxy-xbpqv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.857548 1596062 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:34.256475 1596062 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:34.256548 1596062 pod_ready.go:86] duration metric: took 398.968386ms for pod "kube-scheduler-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:34.256562 1596062 pod_ready.go:40] duration metric: took 33.919672235s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:36:34.315168 1596062 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 14:36:34.318274 1596062 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-186820" cluster and "default" namespace by default
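
Note: pod_ready.go above waits for each kube-system pod to report the Ready condition. A sketch of the same check as a standalone client-go program (hypothetical, requires k8s.io/client-go in go.mod and the kubeconfig written earlier):

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pod, err := clientset.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-wb8jw", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			fmt.Println("Ready condition:", cond.Status) // "True" once all containers are ready
    		}
    	}
    }
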
	
	
	==> Docker <==
	Sep 29 14:39:38 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:38.958317187Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:39:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:41.053028152Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:39:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:41.247792180Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:39:41 embed-certs-641794 dockerd[904]: time="2025-09-29T14:39:41.248047666Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:39:41 embed-certs-641794 cri-dockerd[1218]: time="2025-09-29T14:39:41Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:44:29 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:29.854957004Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:44:29 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:29.854995077Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:44:29 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:29.858492708Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:44:29 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:29.858531847Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:44:43 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:43.881820167Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:44:43 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:43.974756386Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:44:54 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:54.083110807Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:44:54 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:54.273619003Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:44:54 embed-certs-641794 dockerd[904]: time="2025-09-29T14:44:54.273719533Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:44:54 embed-certs-641794 cri-dockerd[1218]: time="2025-09-29T14:44:54Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:49:31 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:31.861629369Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:49:31 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:31.862188433Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:49:31 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:31.864942528Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:49:31 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:31.864987222Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host"
	Sep 29 14:49:50 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:50.880108835Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:49:50 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:50.969781618Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:49:59 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:59.061261084Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:49:59 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:59.265594769Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:49:59 embed-certs-641794 dockerd[904]: time="2025-09-29T14:49:59.265761229Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:49:59 embed-certs-641794 cri-dockerd[1218]: time="2025-09-29T14:49:59Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
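
Note: the journal above shows two recurring pull failures: docker.io/kubernetesui/dashboard hits the unauthenticated Docker Hub rate limit (toomanyrequests), and fake.domain/registry.k8s.io/echoserver:1.4 cannot resolve, which mirrors the deliberately bogus CustomAddonRegistries entry seen in the profile config earlier. A small sketch that surfaces the rate-limit case from a pull (assumes the docker CLI is on PATH; the image reference is only an example):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	image := "docker.io/kubernetesui/dashboard:v2.7.0" // example reference, not prescriptive
    	out, err := exec.Command("docker", "pull", image).CombinedOutput()
    	if err != nil && strings.Contains(string(out), "toomanyrequests") {
    		fmt.Println("hit the unauthenticated Docker Hub rate limit; authenticate or use a mirror")
    		return
    	}
    	if err != nil {
    		fmt.Println("pull failed:", err, string(out))
    		return
    	}
    	fmt.Println("pulled", image)
    }
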
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a5e3d225d2637       ba04bb24b9575                                                                                         17 minutes ago      Running             storage-provisioner       2                   041045876cfb0       storage-provisioner
	a9885e4fb4ba1       138784d87c9c5                                                                                         18 minutes ago      Running             coredns                   1                   cff4b2a992f48       coredns-66bc5c9577-hmpmx
	4aa49530e7852       1611cd07b61d5                                                                                         18 minutes ago      Running             busybox                   1                   92ed894df452f       busybox
	25d990cb20b46       6fc32d66c1411                                                                                         18 minutes ago      Running             kube-proxy                1                   40a9a09bfa7d0       kube-proxy-hq49j
	00b043c910b1a       ba04bb24b9575                                                                                         18 minutes ago      Exited              storage-provisioner       1                   041045876cfb0       storage-provisioner
	baae0522a93f6       a25f5ef9c34c3                                                                                         18 minutes ago      Running             kube-scheduler            1                   a04885a4a6910       kube-scheduler-embed-certs-641794
	2185abcc460ea       d291939e99406                                                                                         18 minutes ago      Running             kube-apiserver            1                   cbcd0c87d851f       kube-apiserver-embed-certs-641794
	1c2e294415724       a1894772a478e                                                                                         18 minutes ago      Running             etcd                      1                   4f218b0a654d2       etcd-embed-certs-641794
	e50bb88c331ce       996be7e86d9b3                                                                                         18 minutes ago      Running             kube-controller-manager   1                   f0f1ad1622dd8       kube-controller-manager-embed-certs-641794
	a7b027d5e346a       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Exited              busybox                   0                   89cf68d9c09e9       busybox
	391289f0044ce       138784d87c9c5                                                                                         20 minutes ago      Exited              coredns                   0                   abee1b5aa3928       coredns-66bc5c9577-hmpmx
	9383fb7d79769       6fc32d66c1411                                                                                         20 minutes ago      Exited              kube-proxy                0                   385ffe7bbb81c       kube-proxy-hq49j
	5296549dc4c62       a25f5ef9c34c3                                                                                         20 minutes ago      Exited              kube-scheduler            0                   1dcc63798c9bc       kube-scheduler-embed-certs-641794
	b429eaa7a04a7       996be7e86d9b3                                                                                         20 minutes ago      Exited              kube-controller-manager   0                   a1b6ef32508ff       kube-controller-manager-embed-certs-641794
	cb04aa5bbfcb3       d291939e99406                                                                                         20 minutes ago      Exited              kube-apiserver            0                   4dd363fb0319e       kube-apiserver-embed-certs-641794
	ba24bc9023aca       a1894772a478e                                                                                         20 minutes ago      Exited              etcd                      0                   a5913658e6bcd       etcd-embed-certs-641794
	
	
	==> coredns [391289f0044c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	[INFO] Reloading complete
	[INFO] 127.0.0.1:49821 - 4211 "HINFO IN 2052675392540096316.4631226984732371511. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011670386s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a9885e4fb4ba] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50006 - 64683 "HINFO IN 6825412484363477214.2882775065589529508. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.065637833s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-641794
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=embed-certs-641794
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=embed-certs-641794
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T14_32_25_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 14:32:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-641794
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 14:52:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 14:50:28 +0000   Mon, 29 Sep 2025 14:32:17 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 14:50:28 +0000   Mon, 29 Sep 2025 14:32:17 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 14:50:28 +0000   Mon, 29 Sep 2025 14:32:17 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 14:50:28 +0000   Mon, 29 Sep 2025 14:32:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-641794
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 21b44f1d4bdf4527aca15effb6a3cb47
	  System UUID:                15b26991-5060-468d-89e2-2473f52c87e3
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-hmpmx                      100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     20m
	  kube-system                 etcd-embed-certs-641794                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         20m
	  kube-system                 kube-apiserver-embed-certs-641794             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-embed-certs-641794    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-hq49j                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-embed-certs-641794             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-746fcd58dc-rns62               100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         19m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-stm84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-47mqf         0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 20m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node embed-certs-641794 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node embed-certs-641794 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node embed-certs-641794 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  20m                kubelet          Node embed-certs-641794 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 20m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    20m                kubelet          Node embed-certs-641794 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     20m                kubelet          Node embed-certs-641794 status is now: NodeHasSufficientPID
	  Normal   Starting                 20m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           20m                node-controller  Node embed-certs-641794 event: Registered Node embed-certs-641794 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-641794 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-641794 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-641794 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-641794 event: Registered Node embed-certs-641794 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [1c2e29441572] <==
	{"level":"warn","ts":"2025-09-29T14:33:45.642091Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.676045Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.701700Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.746335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.776602Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.816594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.854660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.871859Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.915370Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.943858Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44772","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:45.962838Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.036593Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44814","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.056302Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.079273Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44856","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.166188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.180625Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.214021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.252782Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:33:46.363087Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44948","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:43:43.771401Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1087}
	{"level":"info","ts":"2025-09-29T14:43:43.796753Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1087,"took":"24.957176ms","hash":241400389,"current-db-size-bytes":3313664,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1388544,"current-db-size-in-use":"1.4 MB"}
	{"level":"info","ts":"2025-09-29T14:43:43.796807Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":241400389,"revision":1087,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T14:48:43.777427Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1345}
	{"level":"info","ts":"2025-09-29T14:48:43.781412Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1345,"took":"3.700594ms","hash":559676900,"current-db-size-bytes":3313664,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1794048,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T14:48:43.781463Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":559676900,"revision":1345,"compact-revision":1087}
	
	
	==> etcd [ba24bc9023ac] <==
	{"level":"warn","ts":"2025-09-29T14:32:20.881274Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.898124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.911971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.940347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.954832Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:20.970615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45882","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:32:21.045343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45910","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:33:17.103574Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T14:33:17.103630Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"embed-certs-641794","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	{"level":"error","ts":"2025-09-29T14:33:17.103728Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:33:24.116116Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:33:24.118597Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:33:24.118720Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"9f0758e1c58a86ed","current-leader-member-id":"9f0758e1c58a86ed"}
	{"level":"info","ts":"2025-09-29T14:33:24.118935Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-29T14:33:24.118990Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-29T14:33:24.119923Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:33:24.119983Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:33:24.119993Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T14:33:24.120261Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:33:24.120288Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.85.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:33:24.120296Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:33:24.123406Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"error","ts":"2025-09-29T14:33:24.123689Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.85.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:33:24.123823Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.85.2:2380"}
	{"level":"info","ts":"2025-09-29T14:33:24.123911Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"embed-certs-641794","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.85.2:2380"],"advertise-client-urls":["https://192.168.85.2:2379"]}
	
	
	==> kernel <==
	 14:52:32 up  6:35,  0 users,  load average: 0.95, 0.71, 1.23
	Linux embed-certs-641794 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2185abcc460e] <==
	I0929 14:48:49.303912       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:49:06.737680       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:49:48.361597       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:49:49.303377       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:49:49.303430       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:49:49.303444       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:49:49.304579       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:49:49.304845       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:49:49.304865       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:50:11.389524       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:51:06.294694       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:51:25.874388       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:51:49.304322       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:51:49.304434       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:51:49.304474       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:51:49.305364       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:51:49.305527       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:51:49.305543       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:52:30.119808       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [cb04aa5bbfcb] <==
	W0929 14:33:26.687035       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.709711       1 logging.go:55] [core] [Channel #83 SubChannel #85]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.746514       1 logging.go:55] [core] [Channel #119 SubChannel #121]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.772916       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.794840       1 logging.go:55] [core] [Channel #9 SubChannel #11]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.811794       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.814269       1 logging.go:55] [core] [Channel #2 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.823237       1 logging.go:55] [core] [Channel #75 SubChannel #77]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.828728       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.841665       1 logging.go:55] [core] [Channel #13 SubChannel #15]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.848291       1 logging.go:55] [core] [Channel #31 SubChannel #33]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.877451       1 logging.go:55] [core] [Channel #27 SubChannel #29]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.884451       1 logging.go:55] [core] [Channel #219 SubChannel #221]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.931306       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.936829       1 logging.go:55] [core] [Channel #95 SubChannel #97]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:26.989159       1 logging.go:55] [core] [Channel #111 SubChannel #113]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.000807       1 logging.go:55] [core] [Channel #59 SubChannel #61]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.029478       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.034002       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.034443       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.057423       1 logging.go:55] [core] [Channel #203 SubChannel #205]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.061052       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.090381       1 logging.go:55] [core] [Channel #39 SubChannel #41]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.182651       1 logging.go:55] [core] [Channel #183 SubChannel #185]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:33:27.244168       1 logging.go:55] [core] [Channel #127 SubChannel #129]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [b429eaa7a04a] <==
	I0929 14:32:28.789735       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0929 14:32:28.798426       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0929 14:32:28.798473       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:32:28.798729       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 14:32:28.798745       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 14:32:28.799222       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0929 14:32:28.799294       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0929 14:32:28.799367       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-641794"
	I0929 14:32:28.799406       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0929 14:32:28.800012       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 14:32:28.800171       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0929 14:32:28.800397       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0929 14:32:28.800560       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 14:32:28.800688       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0929 14:32:28.800776       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 14:32:28.801197       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0929 14:32:28.801346       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0929 14:32:28.803391       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 14:32:28.808620       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0929 14:32:28.813275       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0929 14:32:28.813286       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0929 14:32:28.820941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:32:28.851177       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 14:32:28.852409       1 shared_informer.go:356] "Caches are synced" controller="service account"
	E0929 14:33:16.312316       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [e50bb88c331c] <==
	I0929 14:46:21.838906       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:46:51.577024       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:46:51.853475       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:47:21.582196       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:47:21.861309       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:47:51.586309       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:47:51.868494       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:48:21.591360       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:48:21.876616       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:48:51.596288       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:48:51.883625       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:49:21.601336       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:49:21.891073       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:49:51.606543       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:49:51.898708       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:50:21.610600       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:50:21.910383       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:50:51.615218       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:50:51.918562       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:51:21.619825       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:51:21.925853       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:51:51.624629       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:51:51.933740       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:52:21.630115       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:52:21.940764       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [25d990cb20b4] <==
	I0929 14:33:51.582317       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:33:51.835311       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:33:51.937436       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:33:51.937477       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 14:33:51.937552       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:33:52.021008       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:33:52.021075       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:33:52.030164       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:33:52.030935       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:33:52.030954       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:33:52.033087       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:33:52.033102       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:33:52.033506       1 config.go:200] "Starting service config controller"
	I0929 14:33:52.033514       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:33:52.038293       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:33:52.038315       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:33:52.054416       1 config.go:309] "Starting node config controller"
	I0929 14:33:52.054436       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:33:52.054443       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:33:52.134550       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:33:52.134610       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 14:33:52.140572       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [9383fb7d7976] <==
	I0929 14:32:30.991256       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:32:31.098266       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:32:31.199388       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:32:31.199424       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0929 14:32:31.199486       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:32:31.244428       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:32:31.244520       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:32:31.254959       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:32:31.255262       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:32:31.255296       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:32:31.256408       1 config.go:200] "Starting service config controller"
	I0929 14:32:31.256427       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:32:31.256855       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:32:31.256871       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:32:31.256891       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:32:31.256895       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:32:31.260983       1 config.go:309] "Starting node config controller"
	I0929 14:32:31.260997       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:32:31.261005       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:32:31.356607       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:32:31.365283       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0929 14:32:31.365551       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5296549dc4c6] <==
	E0929 14:32:21.891748       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:32:21.892196       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 14:32:22.691094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 14:32:22.693972       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 14:32:22.739274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 14:32:22.861406       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 14:32:22.873234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 14:32:22.912269       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:32:22.915835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:32:22.921217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 14:32:22.926192       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:32:22.934099       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 14:32:22.965852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 14:32:23.008129       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 14:32:23.008606       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 14:32:23.053605       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 14:32:23.079116       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 14:32:23.181216       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0929 14:32:26.047018       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:33:17.262464       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 14:33:17.262495       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 14:33:17.262515       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 14:33:17.262539       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:33:17.262752       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 14:33:17.262767       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [baae0522a93f] <==
	I0929 14:33:48.184595       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:33:48.214437       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:33:48.214488       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:33:48.226478       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 14:33:48.226607       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0929 14:33:48.262146       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 14:33:48.262521       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 14:33:48.262565       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 14:33:48.262604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0929 14:33:48.262642       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0929 14:33:48.262703       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 14:33:48.262798       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 14:33:48.262838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 14:33:48.262873       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 14:33:48.262904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 14:33:48.274659       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0929 14:33:48.274817       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0929 14:33:48.274867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 14:33:48.274919       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:33:48.274953       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 14:33:48.285793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0929 14:33:48.285891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0929 14:33:48.286015       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 14:33:48.286086       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0929 14:33:49.715037       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 29 14:50:51 embed-certs-641794 kubelet[1405]: E0929 14:50:51.837458    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:50:53 embed-certs-641794 kubelet[1405]: E0929 14:50:53.837801    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:50:59 embed-certs-641794 kubelet[1405]: E0929 14:50:59.849257    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:51:04 embed-certs-641794 kubelet[1405]: E0929 14:51:04.834080    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:51:07 embed-certs-641794 kubelet[1405]: E0929 14:51:07.835870    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:51:12 embed-certs-641794 kubelet[1405]: E0929 14:51:12.837691    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:51:17 embed-certs-641794 kubelet[1405]: E0929 14:51:17.835152    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:51:20 embed-certs-641794 kubelet[1405]: E0929 14:51:20.834282    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:51:27 embed-certs-641794 kubelet[1405]: E0929 14:51:27.837186    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:51:30 embed-certs-641794 kubelet[1405]: E0929 14:51:30.834496    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:51:31 embed-certs-641794 kubelet[1405]: E0929 14:51:31.835949    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:51:39 embed-certs-641794 kubelet[1405]: E0929 14:51:39.845469    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:51:44 embed-certs-641794 kubelet[1405]: E0929 14:51:44.834870    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:51:45 embed-certs-641794 kubelet[1405]: E0929 14:51:45.835718    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:51:52 embed-certs-641794 kubelet[1405]: E0929 14:51:52.834328    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:51:57 embed-certs-641794 kubelet[1405]: E0929 14:51:57.835018    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:51:57 embed-certs-641794 kubelet[1405]: E0929 14:51:57.836278    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:52:06 embed-certs-641794 kubelet[1405]: E0929 14:52:06.834333    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:52:08 embed-certs-641794 kubelet[1405]: E0929 14:52:08.835969    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:52:08 embed-certs-641794 kubelet[1405]: E0929 14:52:08.836334    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:52:19 embed-certs-641794 kubelet[1405]: E0929 14:52:19.835882    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	Sep 29 14:52:20 embed-certs-641794 kubelet[1405]: E0929 14:52:20.834225    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.85.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-rns62" podUID="05688502-f3cf-4c29-93bb-f0c51bdb4c0b"
	Sep 29 14:52:21 embed-certs-641794 kubelet[1405]: E0929 14:52:21.847091    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:52:32 embed-certs-641794 kubelet[1405]: E0929 14:52:32.836040    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-stm84" podUID="8cb05176-b4b0-46d2-b097-9ccde558faef"
	Sep 29 14:52:32 embed-certs-641794 kubelet[1405]: E0929 14:52:32.836589    1405 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-47mqf" podUID="da179c3b-5a5b-452e-9da4-57b22177fba3"
	
	
	==> storage-provisioner [00b043c910b1] <==
	I0929 14:33:51.094730       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 14:34:21.101217       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a5e3d225d263] <==
	W0929 14:52:08.936654       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:10.939972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:10.947001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:12.950387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:12.954977       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:14.958614       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:14.965230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:16.967854       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:16.972473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:18.975286       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:18.980191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:20.983278       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:20.990184       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:22.993844       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:23.000923       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:25.009917       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:25.016147       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:27.019601       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:27.027154       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:29.030564       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:29.037627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:31.041283       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:31.050533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:33.054191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:52:33.063294       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-641794 -n embed-certs-641794
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-641794 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-rns62 dashboard-metrics-scraper-6ffb444bf9-stm84 kubernetes-dashboard-855c9754f9-47mqf
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-641794 describe pod metrics-server-746fcd58dc-rns62 dashboard-metrics-scraper-6ffb444bf9-stm84 kubernetes-dashboard-855c9754f9-47mqf
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-641794 describe pod metrics-server-746fcd58dc-rns62 dashboard-metrics-scraper-6ffb444bf9-stm84 kubernetes-dashboard-855c9754f9-47mqf: exit status 1 (92.937133ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-rns62" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-stm84" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-47mqf" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-641794 describe pod metrics-server-746fcd58dc-rns62 dashboard-metrics-scraper-6ffb444bf9-stm84 kubernetes-dashboard-855c9754f9-47mqf: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (543.04s)
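Note: the kubelet events above record two distinct pull failures. registry.k8s.io/echoserver:1.4 is rejected because its manifest uses the removed Docker Image schema 1 format, and docker.io/kubernetesui/dashboard:v2.7.0 hits the unauthenticated Docker Hub pull rate limit (toomanyrequests). Below is a minimal mitigation sketch for the rate-limit case, assuming Docker Hub credentials are available on the CI host and reusing the profile name from this run (embed-certs-641794); it is not part of the test harness, and because the pod pins the image by digest, whether kubelet still re-pulls afterwards depends on its imagePullPolicy and the runtime's handling of digest references:

	# authenticate so pulls count against the account's higher rate limit
	docker login
	# pre-pull the dashboard image on the host
	docker pull docker.io/kubernetesui/dashboard:v2.7.0
	# copy it from the host daemon into the minikube node for this profile
	out/minikube-linux-arm64 -p embed-certs-641794 image load docker.io/kubernetesui/dashboard:v2.7.0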

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tdxbq" [4e2ddb81-1cba-47a1-897a-4f8a7912d3f3] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E0929 14:45:38.296294 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:45:52.916111 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:45:53.318945 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:46:53.256713 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:47:00.340944 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:47:03.456030 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:47:05.425736 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:47:47.162523 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:47:50.246420 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:48:04.384487 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:48:47.930259 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:48:55.981239 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:48:59.883473 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:49:02.358810 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:49:10.225701 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:49:56.324085 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:50:01.311447 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:50:03.684165 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:50:10.991435 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:50:20.566719 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:50:21.376255 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:50:38.296058 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:50:52.916215 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:51:53.256670 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:52:00.343981 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:52:03.455797 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-09-29 14:54:37.938420534 +0000 UTC m=+6771.203633895
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 describe po kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) kubectl --context default-k8s-diff-port-186820 describe po kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard:
Name:             kubernetes-dashboard-855c9754f9-tdxbq
Namespace:        kubernetes-dashboard
Priority:         0
Service Account:  kubernetes-dashboard
Node:             default-k8s-diff-port-186820/192.168.76.2
Start Time:       Mon, 29 Sep 2025 14:36:02 +0000
Labels:           gcp-auth-skip-secret=true
k8s-app=kubernetes-dashboard
pod-template-hash=855c9754f9
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/kubernetes-dashboard-855c9754f9
Containers:
kubernetes-dashboard:
Container ID:  
Image:         docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
Image ID:      
Port:          9090/TCP
Host Port:     0/TCP
Args:
--namespace=kubernetes-dashboard
--enable-skip-login
--disable-settings-authorizer
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Liveness:       http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment:    <none>
Mounts:
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-c9ghl (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
tmp-volume:
Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:     
SizeLimit:  <unset>
kube-api-access-c9ghl:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              kubernetes.io/os=linux
Tolerations:                 node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  18m                   default-scheduler  Successfully assigned kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq to default-k8s-diff-port-186820
Normal   Pulling    15m (x5 over 18m)     kubelet            Pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     15m (x5 over 18m)     kubelet            Failed to pull image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93": Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit
Warning  Failed     15m (x5 over 18m)     kubelet            Error: ErrImagePull
Normal   BackOff    3m31s (x65 over 18m)  kubelet            Back-off pulling image "docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
Warning  Failed     3m31s (x65 over 18m)  kubelet            Error: ImagePullBackOff
start_stop_delete_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 logs kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard
start_stop_delete_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-186820 logs kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard: exit status 1 (108.422085ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "kubernetes-dashboard" in pod "kubernetes-dashboard-855c9754f9-tdxbq" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
start_stop_delete_test.go:285: kubectl --context default-k8s-diff-port-186820 logs kubernetes-dashboard-855c9754f9-tdxbq -n kubernetes-dashboard: exit status 1
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-186820
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-186820:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d",
	        "Created": "2025-09-29T14:34:38.00395341Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1596191,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-29T14:35:42.727283628Z",
	            "FinishedAt": "2025-09-29T14:35:41.897203495Z"
	        },
	        "Image": "sha256:3d6f74760dfc17060da5abc5d463d3d45b4ceea05955c9cc42b3ec56cb38cc48",
	        "ResolvConfPath": "/var/lib/docker/containers/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d/hostname",
	        "HostsPath": "/var/lib/docker/containers/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d/hosts",
	        "LogPath": "/var/lib/docker/containers/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d/53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d-json.log",
	        "Name": "/default-k8s-diff-port-186820",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-186820:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "default-k8s-diff-port-186820",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "53a9ac8f8b5fca1807c42bb121c016f5e119a7599a5d50f095620f614844f60d",
	                "LowerDir": "/var/lib/docker/overlay2/3615b22570de9378170039820eb0e505714a2d82f7118b9c9b22da5ad0f38b61-init/diff:/var/lib/docker/overlay2/131eb13c105941e1413431255a86d3f8e028faf09e8615e9e5b8dbe91366a7f8/diff",
	                "MergedDir": "/var/lib/docker/overlay2/3615b22570de9378170039820eb0e505714a2d82f7118b9c9b22da5ad0f38b61/merged",
	                "UpperDir": "/var/lib/docker/overlay2/3615b22570de9378170039820eb0e505714a2d82f7118b9c9b22da5ad0f38b61/diff",
	                "WorkDir": "/var/lib/docker/overlay2/3615b22570de9378170039820eb0e505714a2d82f7118b9c9b22da5ad0f38b61/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-186820",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-186820/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-186820",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-186820",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-186820",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "783abf3c4db843da08eb3592c5440792ba1a7ca1ddfc77f6acf07cb7d036e206",
	            "SandboxKey": "/var/run/docker/netns/783abf3c4db8",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34321"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34322"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34325"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34323"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34324"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-186820": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "96:db:0e:d2:37:ab",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "07a99473690a202625d605e5721cbda950adaf6af5f172bb7ac62453a5d36cb4",
	                    "EndpointID": "22bd89631aa1f5b98ca530e1e2e5eca83158fbece80d6a04776953df6ca474b7",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-186820",
	                        "53a9ac8f8b5f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-186820 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-arm64 -p default-k8s-diff-port-186820 logs -n 25: (1.300532537s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                      ARGS                                                                                                                       │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p disable-driver-mounts-627946                                                                                                                                                                                                                 │ disable-driver-mounts-627946 │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ start   │ -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ addons  │ enable metrics-server -p embed-certs-641794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                        │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ stop    │ -p embed-certs-641794 --alsologtostderr -v=3                                                                                                                                                                                                    │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ addons  │ enable dashboard -p embed-certs-641794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                   │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ start   │ -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                                        │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:34 UTC │
	│ addons  │ enable metrics-server -p newest-cni-093064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                         │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:33 UTC │
	│ stop    │ -p newest-cni-093064 --alsologtostderr -v=3                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:33 UTC │ 29 Sep 25 14:34 UTC │
	│ addons  │ enable dashboard -p newest-cni-093064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                    │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ start   │ -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0 │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ image   │ newest-cni-093064 image list --format=json                                                                                                                                                                                                      │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ pause   │ -p newest-cni-093064 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ unpause │ -p newest-cni-093064 --alsologtostderr -v=1                                                                                                                                                                                                     │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ delete  │ -p newest-cni-093064                                                                                                                                                                                                                            │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ delete  │ -p newest-cni-093064                                                                                                                                                                                                                            │ newest-cni-093064            │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:34 UTC │
	│ start   │ -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:34 UTC │ 29 Sep 25 14:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-186820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                              │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ stop    │ -p default-k8s-diff-port-186820 --alsologtostderr -v=3                                                                                                                                                                                          │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-186820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                         │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:35 UTC │
	│ start   │ -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-186820 │ jenkins │ v1.37.0 │ 29 Sep 25 14:35 UTC │ 29 Sep 25 14:36 UTC │
	│ image   │ embed-certs-641794 image list --format=json                                                                                                                                                                                                     │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:52 UTC │ 29 Sep 25 14:52 UTC │
	│ pause   │ -p embed-certs-641794 --alsologtostderr -v=1                                                                                                                                                                                                    │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:52 UTC │ 29 Sep 25 14:52 UTC │
	│ unpause │ -p embed-certs-641794 --alsologtostderr -v=1                                                                                                                                                                                                    │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:52 UTC │ 29 Sep 25 14:52 UTC │
	│ delete  │ -p embed-certs-641794                                                                                                                                                                                                                           │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:52 UTC │ 29 Sep 25 14:52 UTC │
	│ delete  │ -p embed-certs-641794                                                                                                                                                                                                                           │ embed-certs-641794           │ jenkins │ v1.37.0 │ 29 Sep 25 14:52 UTC │ 29 Sep 25 14:52 UTC │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴────────────
─────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 14:35:42
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
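
The four header lines above spell out the klog-style format that every trace line below follows. As a reading aid, a minimal Go sketch (not part of minikube) that splits one such line into its fields, assuming exactly the format string documented above:

	package main

	import (
		"fmt"
		"regexp"
	)

	// Matches the documented format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

	func main() {
		sample := "I0929 14:35:42.456122 1596062 out.go:360] Setting OutFile to fd 1 ..."
		if m := klogLine.FindStringSubmatch(sample); m != nil {
			fmt.Printf("severity=%s date=%s time=%s pid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}
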
	I0929 14:35:42.456122 1596062 out.go:360] Setting OutFile to fd 1 ...
	I0929 14:35:42.456362 1596062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:35:42.456395 1596062 out.go:374] Setting ErrFile to fd 2...
	I0929 14:35:42.456415 1596062 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 14:35:42.456738 1596062 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 14:35:42.457163 1596062 out.go:368] Setting JSON to false
	I0929 14:35:42.458288 1596062 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":22695,"bootTime":1759133848,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 14:35:42.458402 1596062 start.go:140] virtualization:  
	I0929 14:35:42.462007 1596062 out.go:179] * [default-k8s-diff-port-186820] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 14:35:42.465793 1596062 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 14:35:42.465926 1596062 notify.go:220] Checking for updates...
	I0929 14:35:42.471729 1596062 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 14:35:42.474683 1596062 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:42.477543 1596062 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 14:35:42.480431 1596062 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 14:35:42.483237 1596062 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 14:35:42.486711 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:42.487301 1596062 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 14:35:42.514877 1596062 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 14:35:42.515008 1596062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:35:42.572860 1596062 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:35:42.562452461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:35:42.572973 1596062 docker.go:318] overlay module found
	I0929 14:35:42.576085 1596062 out.go:179] * Using the docker driver based on existing profile
	I0929 14:35:42.578939 1596062 start.go:304] selected driver: docker
	I0929 14:35:42.578961 1596062 start.go:924] validating driver "docker" against &{Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion
:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:42.579120 1596062 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 14:35:42.579853 1596062 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 14:35:42.635895 1596062 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:42 OomKillDisable:true NGoroutines:52 SystemTime:2025-09-29 14:35:42.626575461 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 14:35:42.636238 1596062 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:35:42.636278 1596062 cni.go:84] Creating CNI manager for ""
	I0929 14:35:42.636347 1596062 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:35:42.636386 1596062 start.go:348] cluster config:
	{Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:42.641605 1596062 out.go:179] * Starting "default-k8s-diff-port-186820" primary control-plane node in "default-k8s-diff-port-186820" cluster
	I0929 14:35:42.645130 1596062 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 14:35:42.648466 1596062 out.go:179] * Pulling base image v0.0.48 ...
	I0929 14:35:42.651441 1596062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:35:42.651462 1596062 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 14:35:42.651506 1596062 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 14:35:42.651523 1596062 cache.go:58] Caching tarball of preloaded images
	I0929 14:35:42.651603 1596062 preload.go:172] Found /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 in cache, skipping download
	I0929 14:35:42.651613 1596062 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on docker
	I0929 14:35:42.651737 1596062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/config.json ...
	I0929 14:35:42.671234 1596062 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0929 14:35:42.671260 1596062 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0929 14:35:42.671281 1596062 cache.go:232] Successfully downloaded all kic artifacts
	I0929 14:35:42.671312 1596062 start.go:360] acquireMachinesLock for default-k8s-diff-port-186820: {Name:mk14ee05a72e1bc87d0193bcc4d30163df297691 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0929 14:35:42.671385 1596062 start.go:364] duration metric: took 48.354µs to acquireMachinesLock for "default-k8s-diff-port-186820"
	I0929 14:35:42.671408 1596062 start.go:96] Skipping create...Using existing machine configuration
	I0929 14:35:42.671416 1596062 fix.go:54] fixHost starting: 
	I0929 14:35:42.671679 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:42.688259 1596062 fix.go:112] recreateIfNeeded on default-k8s-diff-port-186820: state=Stopped err=<nil>
	W0929 14:35:42.688293 1596062 fix.go:138] unexpected machine state, will restart: <nil>
	I0929 14:35:42.691565 1596062 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-186820" ...
	I0929 14:35:42.691663 1596062 cli_runner.go:164] Run: docker start default-k8s-diff-port-186820
	I0929 14:35:42.980213 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:43.009181 1596062 kic.go:430] container "default-k8s-diff-port-186820" state is running.
	I0929 14:35:43.009618 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:43.038170 1596062 profile.go:143] Saving config to /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/config.json ...
	I0929 14:35:43.038413 1596062 machine.go:93] provisionDockerMachine start ...
	I0929 14:35:43.038482 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:43.061723 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:43.062111 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:43.062127 1596062 main.go:141] libmachine: About to run SSH command:
	hostname
	I0929 14:35:43.062747 1596062 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:41000->127.0.0.1:34321: read: connection reset by peer
	I0929 14:35:46.204046 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-186820
	
	I0929 14:35:46.204073 1596062 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-186820"
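
Each SSH session in this phase targets 127.0.0.1 on whatever host port Docker published for the container's 22/tcp (34321 here); the inspect template shown in the surrounding Run lines is what resolves that port. A short Go sketch of the same lookup, reusing the container name and template from the log; the printed ssh invocation is only an assumed manual equivalent, not something minikube executes:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template that appears in the log: pick the host port bound
		// to the container's 22/tcp.
		const tmpl = `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
			"default-k8s-diff-port-186820").Output()
		if err != nil {
			panic(err)
		}
		port := strings.TrimSpace(string(out))
		// With the key path and user reported later in the log, a manual
		// session would look roughly like this (assumed, for illustration):
		fmt.Printf("ssh -p %s -i .minikube/machines/default-k8s-diff-port-186820/id_rsa docker@127.0.0.1\n", port)
	}
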
	I0929 14:35:46.204141 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.222056 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.222389 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.222406 1596062 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-186820 && echo "default-k8s-diff-port-186820" | sudo tee /etc/hostname
	I0929 14:35:46.377247 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-186820
	
	I0929 14:35:46.377348 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.397114 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.397485 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.397509 1596062 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-186820' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-186820/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-186820' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0929 14:35:46.537135 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0929 14:35:46.537160 1596062 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21652-1125775/.minikube CaCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21652-1125775/.minikube}
	I0929 14:35:46.537236 1596062 ubuntu.go:190] setting up certificates
	I0929 14:35:46.537245 1596062 provision.go:84] configureAuth start
	I0929 14:35:46.537316 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:46.558841 1596062 provision.go:143] copyHostCerts
	I0929 14:35:46.558910 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem, removing ...
	I0929 14:35:46.558934 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem
	I0929 14:35:46.559026 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.pem (1078 bytes)
	I0929 14:35:46.559142 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem, removing ...
	I0929 14:35:46.559154 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem
	I0929 14:35:46.559183 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/cert.pem (1123 bytes)
	I0929 14:35:46.559251 1596062 exec_runner.go:144] found /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem, removing ...
	I0929 14:35:46.559260 1596062 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem
	I0929 14:35:46.559289 1596062 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21652-1125775/.minikube/key.pem (1671 bytes)
	I0929 14:35:46.559350 1596062 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-186820 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-186820 localhost minikube]
	I0929 14:35:46.733893 1596062 provision.go:177] copyRemoteCerts
	I0929 14:35:46.733959 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0929 14:35:46.733998 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.755356 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:46.858489 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0929 14:35:46.883909 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0929 14:35:46.910465 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0929 14:35:46.942412 1596062 provision.go:87] duration metric: took 405.141346ms to configureAuth
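
configureAuth, which just completed above, regenerates the Docker server certificate so its SANs cover 127.0.0.1, 192.168.76.2, the node name, localhost and minikube, then ships it to /etc/docker. For illustration, a self-contained Go sketch that issues a certificate with those SANs; it self-signs for brevity, whereas minikube signs with the ca.pem/ca-key.pem pair listed in the auth options above:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Sketch only: self-signed server cert carrying the SANs reported
		// by the provision step above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-186820"}},
			DNSNames:     []string{"default-k8s-diff-port-186820", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}
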
	I0929 14:35:46.942438 1596062 ubuntu.go:206] setting minikube options for container-runtime
	I0929 14:35:46.942640 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:46.942699 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:46.959513 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:46.959825 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:46.959842 1596062 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0929 14:35:47.108999 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0929 14:35:47.109020 1596062 ubuntu.go:71] root file system type: overlay
	I0929 14:35:47.109131 1596062 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0929 14:35:47.109201 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.126915 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:47.127240 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:47.127365 1596062 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0929 14:35:47.281272 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I0929 14:35:47.281364 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.299262 1596062 main.go:141] libmachine: Using SSH client type: native
	I0929 14:35:47.299576 1596062 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x3ef1e0] 0x3f19a0 <nil>  [] 0s} 127.0.0.1 34321 <nil> <nil>}
	I0929 14:35:47.299606 1596062 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0929 14:35:47.450591 1596062 main.go:141] libmachine: SSH cmd err, output: <nil>: 
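
The SSH command just above is an idempotent unit update: the freshly rendered docker.service.new replaces the installed unit, followed by a daemon-reload, enable and restart, only when the two files actually differ. A hypothetical Go helper expressing the same compare-then-swap pattern (not minikube's code; paths and service name are taken from the log):

	package main

	import (
		"bytes"
		"fmt"
		"os"
		"os/exec"
	)

	// replaceIfChanged mirrors the "diff || { mv; daemon-reload; restart; }"
	// shell pattern from the log: install the candidate unit and restart the
	// service only when it differs from what is already installed.
	func replaceIfChanged(current, candidate, service string) error {
		old, _ := os.ReadFile(current) // a missing file simply counts as "differs"
		nu, err := os.ReadFile(candidate)
		if err != nil {
			return err
		}
		if bytes.Equal(old, nu) {
			return os.Remove(candidate) // nothing to do
		}
		if err := os.Rename(candidate, current); err != nil {
			return err
		}
		for _, args := range [][]string{
			{"daemon-reload"},
			{"enable", service},
			{"restart", service},
		} {
			if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
				return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
			}
		}
		return nil
	}

	func main() {
		if err := replaceIfChanged(
			"/lib/systemd/system/docker.service",
			"/lib/systemd/system/docker.service.new",
			"docker",
		); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
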
	I0929 14:35:47.450619 1596062 machine.go:96] duration metric: took 4.41218926s to provisionDockerMachine
	I0929 14:35:47.450630 1596062 start.go:293] postStartSetup for "default-k8s-diff-port-186820" (driver="docker")
	I0929 14:35:47.450641 1596062 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0929 14:35:47.450716 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0929 14:35:47.450765 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.470252 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.570022 1596062 ssh_runner.go:195] Run: cat /etc/os-release
	I0929 14:35:47.573521 1596062 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0929 14:35:47.573556 1596062 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0929 14:35:47.573567 1596062 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0929 14:35:47.573574 1596062 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0929 14:35:47.573585 1596062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/addons for local assets ...
	I0929 14:35:47.573643 1596062 filesync.go:126] Scanning /home/jenkins/minikube-integration/21652-1125775/.minikube/files for local assets ...
	I0929 14:35:47.573731 1596062 filesync.go:149] local asset: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem -> 11276402.pem in /etc/ssl/certs
	I0929 14:35:47.573850 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0929 14:35:47.582484 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:35:47.607719 1596062 start.go:296] duration metric: took 157.074022ms for postStartSetup
	I0929 14:35:47.607821 1596062 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 14:35:47.607869 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.624930 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.721416 1596062 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0929 14:35:47.725935 1596062 fix.go:56] duration metric: took 5.054511148s for fixHost
	I0929 14:35:47.725957 1596062 start.go:83] releasing machines lock for "default-k8s-diff-port-186820", held for 5.054560232s
	I0929 14:35:47.726022 1596062 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-186820
	I0929 14:35:47.743658 1596062 ssh_runner.go:195] Run: cat /version.json
	I0929 14:35:47.743708 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.743985 1596062 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0929 14:35:47.744046 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:47.767655 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.776135 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:47.868074 1596062 ssh_runner.go:195] Run: systemctl --version
	I0929 14:35:48.003111 1596062 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0929 14:35:48.010051 1596062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "grep -q loopback {} && ( grep -q name {} || sudo sed -i '/"type": "loopback"/i \ \ \ \ "name": "loopback",' {} ) && sudo sed -i 's|"cniVersion": ".*"|"cniVersion": "1.0.0"|g' {}" ;
	I0929 14:35:48.037046 1596062 cni.go:230] loopback cni configuration patched: "/etc/cni/net.d/*loopback.conf*" found
	I0929 14:35:48.037127 1596062 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0929 14:35:48.046790 1596062 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0929 14:35:48.046821 1596062 start.go:495] detecting cgroup driver to use...
	I0929 14:35:48.046855 1596062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:35:48.046959 1596062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:35:48.064298 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I0929 14:35:48.077373 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0929 14:35:48.087939 1596062 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0929 14:35:48.088011 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0929 14:35:48.099214 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:35:48.109800 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0929 14:35:48.119860 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0929 14:35:48.129709 1596062 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0929 14:35:48.140034 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0929 14:35:48.151023 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0929 14:35:48.162212 1596062 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0929 14:35:48.173065 1596062 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0929 14:35:48.182304 1596062 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0929 14:35:48.191122 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:48.275156 1596062 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0929 14:35:48.388383 1596062 start.go:495] detecting cgroup driver to use...
	I0929 14:35:48.388435 1596062 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0929 14:35:48.388487 1596062 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0929 14:35:48.403898 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:35:48.417945 1596062 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0929 14:35:48.450429 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0929 14:35:48.462890 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0929 14:35:48.476336 1596062 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0929 14:35:48.497267 1596062 ssh_runner.go:195] Run: which cri-dockerd
	I0929 14:35:48.501572 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0929 14:35:48.513810 1596062 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I0929 14:35:48.548394 1596062 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0929 14:35:48.651762 1596062 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0929 14:35:48.744803 1596062 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I0929 14:35:48.744903 1596062 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
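
Only the size of the generated /etc/docker/daemon.json (130 bytes) is recorded here, not its contents. The snippet below prints an assumed minimal equivalent that pins the cgroupfs driver via exec-opts; the file minikube actually writes may carry additional keys:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Assumed minimal daemon.json content that forces the cgroupfs
		// cgroup driver; not the exact payload from the log above.
		cfg := map[string]any{
			"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
		}
		b, _ := json.MarshalIndent(cfg, "", "  ")
		fmt.Println(string(b))
	}
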
	I0929 14:35:48.765355 1596062 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I0929 14:35:48.778732 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:48.873398 1596062 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0929 14:35:49.382274 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0929 14:35:49.394500 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0929 14:35:49.406617 1596062 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0929 14:35:49.420787 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:35:49.432705 1596062 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0929 14:35:49.525907 1596062 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0929 14:35:49.612769 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:49.715560 1596062 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0929 14:35:49.731642 1596062 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I0929 14:35:49.743392 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:49.840499 1596062 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0929 14:35:49.933414 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0929 14:35:49.952842 1596062 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0929 14:35:49.952912 1596062 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0929 14:35:49.956643 1596062 start.go:563] Will wait 60s for crictl version
	I0929 14:35:49.956708 1596062 ssh_runner.go:195] Run: which crictl
	I0929 14:35:49.960634 1596062 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0929 14:35:50.005514 1596062 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  28.4.0
	RuntimeApiVersion:  v1
	I0929 14:35:50.005607 1596062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:35:50.035266 1596062 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0929 14:35:50.064977 1596062 out.go:252] * Preparing Kubernetes v1.34.0 on Docker 28.4.0 ...
	I0929 14:35:50.065096 1596062 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-186820 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0929 14:35:50.085518 1596062 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0929 14:35:50.090438 1596062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0929 14:35:50.104259 1596062 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGI
D:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0929 14:35:50.104391 1596062 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 14:35:50.104452 1596062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:35:50.126383 1596062 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:35:50.126409 1596062 docker.go:621] Images already preloaded, skipping extraction
	I0929 14:35:50.126472 1596062 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0929 14:35:50.146276 1596062 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.34.0
	registry.k8s.io/kube-controller-manager:v1.34.0
	registry.k8s.io/kube-scheduler:v1.34.0
	registry.k8s.io/kube-proxy:v1.34.0
	registry.k8s.io/etcd:3.6.4-0
	registry.k8s.io/pause:3.10.1
	registry.k8s.io/coredns/coredns:v1.12.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28.4-glibc
	
	-- /stdout --
	I0929 14:35:50.146318 1596062 cache_images.go:85] Images are preloaded, skipping loading
	I0929 14:35:50.146329 1596062 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 docker true true} ...
	I0929 14:35:50.146441 1596062 kubeadm.go:938] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-186820 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0929 14:35:50.146513 1596062 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0929 14:35:50.200396 1596062 cni.go:84] Creating CNI manager for ""
	I0929 14:35:50.200426 1596062 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 14:35:50.200440 1596062 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0929 14:35:50.200460 1596062 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-186820 NodeName:default-k8s-diff-port-186820 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/c
a.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0929 14:35:50.200650 1596062 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "default-k8s-diff-port-186820"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
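The rendered config above is four YAML documents in a single file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that gets shipped to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A minimal Go sketch for inspecting such a multi-document file, assuming gopkg.in/yaml.v3 is available; the path and the expectation of the non-default port 8444 are taken from this log, nothing else:

package main

import (
	"fmt"
	"log"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path copied from the log above; adjust when reproducing elsewhere.
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all four documents are read
		}
		kind := doc["kind"]
		fmt.Println("document:", kind)
		// Surface the control-plane endpoint so the non-default
		// apiserver port 8444 used by this profile is easy to spot.
		if kind == "ClusterConfiguration" {
			fmt.Println("  controlPlaneEndpoint:", doc["controlPlaneEndpoint"])
		}
	}
}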
	I0929 14:35:50.200727 1596062 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0929 14:35:50.210044 1596062 binaries.go:44] Found k8s binaries, skipping transfer
	I0929 14:35:50.210118 1596062 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0929 14:35:50.219378 1596062 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0929 14:35:50.237028 1596062 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0929 14:35:50.255641 1596062 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2229 bytes)
	I0929 14:35:50.274465 1596062 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0929 14:35:50.278275 1596062 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
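The bash one-liner above strips any existing control-plane.minikube.internal entry before appending the fresh mapping, so repeated starts do not accumulate duplicates in /etc/hosts. A rough Go equivalent of that filter-and-append step, as a sketch only: it writes the result to a scratch path rather than /etc/hosts, and the IP and hostname are the ones from this run.

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const host = "control-plane.minikube.internal"
	const ip = "192.168.76.2"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any previous mapping for the control-plane alias,
		// mirroring the grep -v in the log above.
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)

	// Sketch: write to a scratch file instead of replacing /etc/hosts.
	if err := os.WriteFile("/tmp/hosts.new", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		log.Fatal(err)
	}
}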
	I0929 14:35:50.289351 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:50.378347 1596062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:35:50.393916 1596062 certs.go:68] Setting up /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820 for IP: 192.168.76.2
	I0929 14:35:50.393942 1596062 certs.go:194] generating shared ca certs ...
	I0929 14:35:50.393959 1596062 certs.go:226] acquiring lock for ca certs: {Name:mk2ca206c678438cc443e63fe0260ecc893c1d98 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:50.394101 1596062 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key
	I0929 14:35:50.394152 1596062 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key
	I0929 14:35:50.394164 1596062 certs.go:256] generating profile certs ...
	I0929 14:35:50.394266 1596062 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/client.key
	I0929 14:35:50.394344 1596062 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.key.3abc893e
	I0929 14:35:50.394410 1596062 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.key
	I0929 14:35:50.394524 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem (1338 bytes)
	W0929 14:35:50.394563 1596062 certs.go:480] ignoring /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640_empty.pem, impossibly tiny 0 bytes
	I0929 14:35:50.394576 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca-key.pem (1675 bytes)
	I0929 14:35:50.394602 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/ca.pem (1078 bytes)
	I0929 14:35:50.394627 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/cert.pem (1123 bytes)
	I0929 14:35:50.394652 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/key.pem (1671 bytes)
	I0929 14:35:50.394699 1596062 certs.go:484] found cert: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem (1708 bytes)
	I0929 14:35:50.395324 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0929 14:35:50.425482 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0929 14:35:50.458821 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0929 14:35:50.492420 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0929 14:35:50.551343 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0929 14:35:50.605319 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0929 14:35:50.639423 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0929 14:35:50.678207 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/default-k8s-diff-port-186820/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0929 14:35:50.718215 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0929 14:35:50.747191 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/certs/1127640.pem --> /usr/share/ca-certificates/1127640.pem (1338 bytes)
	I0929 14:35:50.779504 1596062 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/ssl/certs/11276402.pem --> /usr/share/ca-certificates/11276402.pem (1708 bytes)
	I0929 14:35:50.809480 1596062 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0929 14:35:50.830273 1596062 ssh_runner.go:195] Run: openssl version
	I0929 14:35:50.836472 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1127640.pem && ln -fs /usr/share/ca-certificates/1127640.pem /etc/ssl/certs/1127640.pem"
	I0929 14:35:50.848203 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.851953 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 29 13:09 /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.852017 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1127640.pem
	I0929 14:35:50.859388 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1127640.pem /etc/ssl/certs/51391683.0"
	I0929 14:35:50.868867 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/11276402.pem && ln -fs /usr/share/ca-certificates/11276402.pem /etc/ssl/certs/11276402.pem"
	I0929 14:35:50.878588 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.882188 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 29 13:09 /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.882261 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/11276402.pem
	I0929 14:35:50.890114 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/11276402.pem /etc/ssl/certs/3ec20f2e.0"
	I0929 14:35:50.899476 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0929 14:35:50.909249 1596062 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.913394 1596062 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 29 13:02 /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.913486 1596062 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0929 14:35:50.921135 1596062 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0929 14:35:50.930563 1596062 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0929 14:35:50.934410 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0929 14:35:50.941795 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0929 14:35:50.950427 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0929 14:35:50.960816 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0929 14:35:50.970602 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0929 14:35:50.977819 1596062 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
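Each `openssl x509 -checkend 86400` run above asks whether the certificate will still be valid 24 hours from now. The same check expressed with Go's crypto/x509, as a sketch; the path is one of the certificates checked in the log:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
	"time"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		log.Fatal("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent of `openssl x509 -checkend 86400`: fail if the
	// certificate expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid until", cert.NotAfter)
}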
	I0929 14:35:50.985284 1596062 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-186820 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-186820 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServe
rNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:d
ocker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 14:35:50.985429 1596062 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0929 14:35:51.006801 1596062 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0929 14:35:51.025256 1596062 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0929 14:35:51.025334 1596062 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0929 14:35:51.025424 1596062 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0929 14:35:51.041400 1596062 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0929 14:35:51.042316 1596062 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-186820" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:51.042910 1596062 kubeconfig.go:62] /home/jenkins/minikube-integration/21652-1125775/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-186820" cluster setting kubeconfig missing "default-k8s-diff-port-186820" context setting]
	I0929 14:35:51.043713 1596062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.045723 1596062 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0929 14:35:51.061546 1596062 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0929 14:35:51.061580 1596062 kubeadm.go:593] duration metric: took 36.227514ms to restartPrimaryControlPlane
	I0929 14:35:51.061589 1596062 kubeadm.go:394] duration metric: took 76.316349ms to StartCluster
	I0929 14:35:51.061606 1596062 settings.go:142] acquiring lock: {Name:mk249a9fcafe0b1d8a711271fd58963fceaa93e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.061666 1596062 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 14:35:51.063237 1596062 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21652-1125775/kubeconfig: {Name:mk597cf1ee15868b03242d28b30b65f8e0e5d009 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0929 14:35:51.063476 1596062 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0929 14:35:51.063781 1596062 config.go:182] Loaded profile config "default-k8s-diff-port-186820": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 14:35:51.063837 1596062 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0929 14:35:51.063907 1596062 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.063922 1596062 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.063934 1596062 addons.go:247] addon storage-provisioner should already be in state true
	I0929 14:35:51.063956 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.064489 1596062 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.064568 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.064581 1596062 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-186820"
	I0929 14:35:51.064928 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.067934 1596062 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.067967 1596062 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.067974 1596062 addons.go:247] addon metrics-server should already be in state true
	I0929 14:35:51.068006 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.068449 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.069089 1596062 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-186820"
	I0929 14:35:51.069110 1596062 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.069117 1596062 addons.go:247] addon dashboard should already be in state true
	I0929 14:35:51.069143 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.069590 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.076810 1596062 out.go:179] * Verifying Kubernetes components...
	I0929 14:35:51.091555 1596062 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0929 14:35:51.118136 1596062 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0929 14:35:51.125122 1596062 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.125149 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0929 14:35:51.125225 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.164326 1596062 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-186820"
	W0929 14:35:51.164353 1596062 addons.go:247] addon default-storageclass should already be in state true
	I0929 14:35:51.164390 1596062 host.go:66] Checking if "default-k8s-diff-port-186820" exists ...
	I0929 14:35:51.170550 1596062 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-186820 --format={{.State.Status}}
	I0929 14:35:51.184841 1596062 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0929 14:35:51.190867 1596062 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0929 14:35:51.199347 1596062 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0929 14:35:51.199401 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0929 14:35:51.205983 1596062 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0929 14:35:51.206084 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.202823 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.213345 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0929 14:35:51.213391 1596062 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0929 14:35:51.213484 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.230915 1596062 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:51.230936 1596062 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0929 14:35:51.230996 1596062 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-186820
	I0929 14:35:51.269958 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.296608 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
	I0929 14:35:51.306953 1596062 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34321 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/default-k8s-diff-port-186820/id_rsa Username:docker}
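The repeated `docker container inspect -f ...` calls above resolve the host port that Docker mapped to the container's port 22, which then feeds the SSH clients (port 34321 in this run). The same lookup from Go by shelling out to the docker CLI, as a sketch; the container name is the profile from this log:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same Go template used in the log to find the host port mapped to 22/tcp.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", format, "default-k8s-diff-port-186820").Output()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}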
	I0929 14:35:51.321614 1596062 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0929 14:35:51.387857 1596062 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-186820" to be "Ready" ...
	I0929 14:35:51.488310 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.584676 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0929 14:35:51.584747 1596062 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0929 14:35:51.636648 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:51.656953 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0929 14:35:51.656977 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0929 14:35:51.769528 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0929 14:35:51.769551 1596062 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	W0929 14:35:51.776704 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:51.776767 1596062 retry.go:31] will retry after 176.889773ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:51.799383 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0929 14:35:51.799417 1596062 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0929 14:35:51.919355 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0929 14:35:51.919384 1596062 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0929 14:35:51.953840 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:51.958674 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0929 14:35:51.958698 1596062 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0929 14:35:51.997497 1596062 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:51.997523 1596062 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0929 14:35:52.312165 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:52.398850 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0929 14:35:52.398879 1596062 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W0929 14:35:52.469654 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.469690 1596062 retry.go:31] will retry after 160.704677ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0929 14:35:52.469763 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.469777 1596062 retry.go:31] will retry after 381.313638ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.566150 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0929 14:35:52.566178 1596062 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0929 14:35:52.631374 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0929 14:35:52.752298 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0929 14:35:52.752376 1596062 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W0929 14:35:52.812288 1596062 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.812366 1596062 retry.go:31] will retry after 303.64621ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0929 14:35:52.851712 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0929 14:35:52.884643 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0929 14:35:52.884713 1596062 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0929 14:35:53.087320 1596062 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0929 14:35:53.087401 1596062 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0929 14:35:53.116319 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0929 14:35:53.151041 1596062 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
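The `apply failed, will retry` / `will retry after N ms` pairs in this stretch are each `kubectl apply` being retried while the freshly restarted apiserver still refuses connections on port 8444. A simplified retry loop in the same spirit, as a sketch rather than minikube's own retry.go; the manifest path and the typical error string are taken from the log:

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	args := []string{
		"--kubeconfig", "/var/lib/minikube/kubeconfig",
		"apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	}
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 5; attempt++ {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		if err == nil {
			log.Printf("applied on attempt %d", attempt)
			return
		}
		// Typical failure while the apiserver restarts:
		// "dial tcp [::1]:8444: connect: connection refused"
		log.Printf("attempt %d failed: %v\n%s", attempt, err, out)
		time.Sleep(delay)
		delay *= 2 // simple exponential backoff between attempts
	}
	log.Fatal("giving up after 5 attempts")
}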
	I0929 14:35:56.942553 1596062 node_ready.go:49] node "default-k8s-diff-port-186820" is "Ready"
	I0929 14:35:56.942583 1596062 node_ready.go:38] duration metric: took 5.554681325s for node "default-k8s-diff-port-186820" to be "Ready" ...
	I0929 14:35:56.942602 1596062 api_server.go:52] waiting for apiserver process to appear ...
	I0929 14:35:56.942665 1596062 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 14:35:57.186445 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (4.55502845s)
	I0929 14:35:59.647559 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (6.795763438s)
	I0929 14:35:59.694900 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (6.578497303s)
	I0929 14:35:59.694937 1596062 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-186820"
	I0929 14:35:59.695034 1596062 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (6.543910818s)
	I0929 14:35:59.695216 1596062 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.752538922s)
	I0929 14:35:59.695237 1596062 api_server.go:72] duration metric: took 8.631722688s to wait for apiserver process to appear ...
	I0929 14:35:59.695243 1596062 api_server.go:88] waiting for apiserver healthz status ...
	I0929 14:35:59.695260 1596062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 14:35:59.698283 1596062 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-186820 addons enable metrics-server
	
	I0929 14:35:59.701228 1596062 out.go:179] * Enabled addons: default-storageclass, storage-provisioner, metrics-server, dashboard
	I0929 14:35:59.704363 1596062 addons.go:514] duration metric: took 8.640511326s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server dashboard]
	I0929 14:35:59.704573 1596062 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0929 14:35:59.704591 1596062 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0929 14:36:00.200300 1596062 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0929 14:36:00.235965 1596062 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0929 14:36:00.246294 1596062 api_server.go:141] control plane version: v1.34.0
	I0929 14:36:00.246322 1596062 api_server.go:131] duration metric: took 551.072592ms to wait for apiserver health ...
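The healthz probe above first returns 500 because one post-start hook (apiservice-discovery-controller) has not finished, then 200 "ok" about half a second later. Polling the same endpoint until it reports healthy could look like the sketch below; certificate verification is skipped here because the apiserver certificate is signed by minikube's own CA, and the URL is the one from the log:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver cert is signed by minikubeCA; skip verification
		// for this probe, or load the CA from the .minikube directory.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.76.2:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // prints "ok"
				return
			}
			fmt.Println("healthz returned", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy in time")
}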
	I0929 14:36:00.246333 1596062 system_pods.go:43] waiting for kube-system pods to appear ...
	I0929 14:36:00.258786 1596062 system_pods.go:59] 8 kube-system pods found
	I0929 14:36:00.258905 1596062 system_pods.go:61] "coredns-66bc5c9577-wb8jw" [c72f66ff-a464-43c6-a0e4-82da1ba66780] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:36:00.258925 1596062 system_pods.go:61] "etcd-default-k8s-diff-port-186820" [a89a2e2c-7628-44d9-a0ff-f7a51680fa48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:36:00.258935 1596062 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-186820" [f6270c6c-df3a-461a-94d1-b1c494e85f0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:36:00.258944 1596062 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-186820" [e5cd4b48-40ea-44c9-9389-804a2a149bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:36:00.259016 1596062 system_pods.go:61] "kube-proxy-xbpqv" [0cb52a5d-89e9-4ed8-9ff3-93c7f80b94a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:36:00.259074 1596062 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-186820" [05635437-5cc5-45f7-aec0-5c447e7679a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:36:00.259092 1596062 system_pods.go:61] "metrics-server-746fcd58dc-nbbb9" [43fcdf52-1359-4a10-8f64-c721fa11c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:36:00.259101 1596062 system_pods.go:61] "storage-provisioner" [d20cd17d-3b6e-4c2a-9d32-f047094f77a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:36:00.259111 1596062 system_pods.go:74] duration metric: took 12.770585ms to wait for pod list to return data ...
	I0929 14:36:00.259168 1596062 default_sa.go:34] waiting for default service account to be created ...
	I0929 14:36:00.267463 1596062 default_sa.go:45] found service account: "default"
	I0929 14:36:00.267489 1596062 default_sa.go:55] duration metric: took 8.313947ms for default service account to be created ...
	I0929 14:36:00.267500 1596062 system_pods.go:116] waiting for k8s-apps to be running ...
	I0929 14:36:00.275897 1596062 system_pods.go:86] 8 kube-system pods found
	I0929 14:36:00.276012 1596062 system_pods.go:89] "coredns-66bc5c9577-wb8jw" [c72f66ff-a464-43c6-a0e4-82da1ba66780] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0929 14:36:00.276046 1596062 system_pods.go:89] "etcd-default-k8s-diff-port-186820" [a89a2e2c-7628-44d9-a0ff-f7a51680fa48] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0929 14:36:00.276089 1596062 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-186820" [f6270c6c-df3a-461a-94d1-b1c494e85f0f] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0929 14:36:00.276122 1596062 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-186820" [e5cd4b48-40ea-44c9-9389-804a2a149bb9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0929 14:36:00.276164 1596062 system_pods.go:89] "kube-proxy-xbpqv" [0cb52a5d-89e9-4ed8-9ff3-93c7f80b94a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0929 14:36:00.276193 1596062 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-186820" [05635437-5cc5-45f7-aec0-5c447e7679a3] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0929 14:36:00.276220 1596062 system_pods.go:89] "metrics-server-746fcd58dc-nbbb9" [43fcdf52-1359-4a10-8f64-c721fa11c8c2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0929 14:36:00.276263 1596062 system_pods.go:89] "storage-provisioner" [d20cd17d-3b6e-4c2a-9d32-f047094f77a1] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0929 14:36:00.276302 1596062 system_pods.go:126] duration metric: took 8.789614ms to wait for k8s-apps to be running ...
	I0929 14:36:00.276347 1596062 system_svc.go:44] waiting for kubelet service to be running ....
	I0929 14:36:00.276463 1596062 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 14:36:00.322130 1596062 system_svc.go:56] duration metric: took 45.77635ms WaitForService to wait for kubelet
	I0929 14:36:00.322171 1596062 kubeadm.go:578] duration metric: took 9.258650816s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0929 14:36:00.322195 1596062 node_conditions.go:102] verifying NodePressure condition ...
	I0929 14:36:00.330255 1596062 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I0929 14:36:00.330363 1596062 node_conditions.go:123] node cpu capacity is 2
	I0929 14:36:00.330378 1596062 node_conditions.go:105] duration metric: took 8.17742ms to run NodePressure ...
	I0929 14:36:00.330394 1596062 start.go:241] waiting for startup goroutines ...
	I0929 14:36:00.330402 1596062 start.go:246] waiting for cluster config update ...
	I0929 14:36:00.330414 1596062 start.go:255] writing updated cluster config ...
	I0929 14:36:00.330883 1596062 ssh_runner.go:195] Run: rm -f paused
	I0929 14:36:00.336791 1596062 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:36:00.352867 1596062 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-wb8jw" in "kube-system" namespace to be "Ready" or be gone ...
	W0929 14:36:02.362537 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:04.859542 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:06.860829 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:09.359186 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:11.859196 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:14.358754 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:16.859093 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:19.358587 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:21.362560 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:23.858978 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:25.863368 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:27.868276 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:30.358700 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	W0929 14:36:32.358763 1596062 pod_ready.go:104] pod "coredns-66bc5c9577-wb8jw" is not "Ready", error: <nil>
	I0929 14:36:32.858935 1596062 pod_ready.go:94] pod "coredns-66bc5c9577-wb8jw" is "Ready"
	I0929 14:36:32.858962 1596062 pod_ready.go:86] duration metric: took 32.506066188s for pod "coredns-66bc5c9577-wb8jw" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.862337 1596062 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.868713 1596062 pod_ready.go:94] pod "etcd-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:32.868746 1596062 pod_ready.go:86] duration metric: took 6.378054ms for pod "etcd-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.871570 1596062 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.876378 1596062 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:32.876410 1596062 pod_ready.go:86] duration metric: took 4.809833ms for pod "kube-apiserver-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:32.879056 1596062 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.057602 1596062 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:33.057631 1596062 pod_ready.go:86] duration metric: took 178.552151ms for pod "kube-controller-manager-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.256851 1596062 pod_ready.go:83] waiting for pod "kube-proxy-xbpqv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.657271 1596062 pod_ready.go:94] pod "kube-proxy-xbpqv" is "Ready"
	I0929 14:36:33.657301 1596062 pod_ready.go:86] duration metric: took 400.41966ms for pod "kube-proxy-xbpqv" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:33.857548 1596062 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:34.256475 1596062 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-186820" is "Ready"
	I0929 14:36:34.256548 1596062 pod_ready.go:86] duration metric: took 398.968386ms for pod "kube-scheduler-default-k8s-diff-port-186820" in "kube-system" namespace to be "Ready" or be gone ...
	I0929 14:36:34.256562 1596062 pod_ready.go:40] duration metric: took 33.919672235s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0929 14:36:34.315168 1596062 start.go:623] kubectl: 1.33.2, cluster: 1.34.0 (minor skew: 1)
	I0929 14:36:34.318274 1596062 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-186820" cluster and "default" namespace by default
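The pod_ready loop above polls the kube-system pods until each reports Ready; coredns took roughly 32 seconds in this run. The same wait can be reproduced from the host with `kubectl wait`, shelled out from Go here for consistency with the other sketches; the context name and label selector are the ones from the log:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// Wait up to four minutes for the CoreDNS pods to become Ready,
	// mirroring the pod_ready wait in the log above.
	cmd := exec.Command("kubectl",
		"--context", "default-k8s-diff-port-186820",
		"-n", "kube-system",
		"wait", "--for=condition=Ready",
		"pod", "-l", "k8s-app=kube-dns",
		"--timeout=4m")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatal(err)
	}
}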
	
	
	==> Docker <==
	Sep 29 14:41:50 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:50.058766964Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:41:50 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:50.058893087Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:41:50 default-k8s-diff-port-186820 cri-dockerd[1213]: time="2025-09-29T14:41:50Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:41:53 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:53.650071444Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:41:53 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:41:53.741755849Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:46:44 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:46:44.622711576Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:46:44 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:46:44.622757394Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:46:44 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:46:44.625886797Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:46:44 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:46:44.625937276Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:46:59 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:46:59.654857491Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:46:59 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:46:59.745909741Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	Sep 29 14:47:02 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:47:02.859848605Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:47:03 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:47:03.068385256Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:47:03 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:47:03.068487108Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:47:03 default-k8s-diff-port-186820 cri-dockerd[1213]: time="2025-09-29T14:47:03Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:51:47 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:51:47.621403254Z" level=warning msg="Error getting v2 registry: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:51:47 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:51:47.621450762Z" level=info msg="Attempting next endpoint for pull after error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:51:47 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:51:47.624443531Z" level=error msg="unexpected HTTP error handling" error="<nil>"
	Sep 29 14:51:47 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:51:47.624553095Z" level=error msg="Handler for POST /v1.46/images/create returned error: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host"
	Sep 29 14:52:07 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:52:07.833852890Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:52:08 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:52:08.036936793Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="registry.k8s.io/echoserver:1.4"
	Sep 29 14:52:08 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:52:08.037111843Z" level=info msg="Attempting next endpoint for pull after error: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/"
	Sep 29 14:52:08 default-k8s-diff-port-186820 cri-dockerd[1213]: time="2025-09-29T14:52:08Z" level=info msg="Stop pulling image registry.k8s.io/echoserver:1.4: 1.4: Pulling from echoserver"
	Sep 29 14:52:08 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:52:08.667827777Z" level=warning msg="reference for unknown type: " digest="sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" remote="docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93"
	Sep 29 14:52:08 default-k8s-diff-port-186820 dockerd[895]: time="2025-09-29T14:52:08.759190075Z" level=error msg="Not continuing with pull after error" error="toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	a41c3ebb08e2f       ba04bb24b9575                                                                                         17 minutes ago      Running             storage-provisioner       2                   7cbd08ca40ade       storage-provisioner
	a66150627d5dd       1611cd07b61d5                                                                                         18 minutes ago      Running             busybox                   1                   a809e47e5f523       busybox
	cfdd90547c839       138784d87c9c5                                                                                         18 minutes ago      Running             coredns                   1                   329e63b0ea158       coredns-66bc5c9577-wb8jw
	e8c1cb770762c       6fc32d66c1411                                                                                         18 minutes ago      Running             kube-proxy                1                   2d6e46f3a03ea       kube-proxy-xbpqv
	1b07bacc73620       ba04bb24b9575                                                                                         18 minutes ago      Exited              storage-provisioner       1                   7cbd08ca40ade       storage-provisioner
	19777c9fb07d6       a1894772a478e                                                                                         18 minutes ago      Running             etcd                      1                   ab8390d7e98a7       etcd-default-k8s-diff-port-186820
	2f7b7ee7a1f85       d291939e99406                                                                                         18 minutes ago      Running             kube-apiserver            1                   8b203e39b310f       kube-apiserver-default-k8s-diff-port-186820
	bf98b9af0d1be       996be7e86d9b3                                                                                         18 minutes ago      Running             kube-controller-manager   1                   339ad92a5ef7a       kube-controller-manager-default-k8s-diff-port-186820
	1befa8ef69edf       a25f5ef9c34c3                                                                                         18 minutes ago      Running             kube-scheduler            1                   71ac6bb7c0203       kube-scheduler-default-k8s-diff-port-186820
	81ed9a49211c4       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   19 minutes ago      Exited              busybox                   0                   8fc319b7ece9e       busybox
	09ce2b32e9384       138784d87c9c5                                                                                         19 minutes ago      Exited              coredns                   0                   27d4ee97939c6       coredns-66bc5c9577-wb8jw
	9bcc157f5d0b5       6fc32d66c1411                                                                                         19 minutes ago      Exited              kube-proxy                0                   cc4fbe899b17c       kube-proxy-xbpqv
	f8c7812825a6e       a1894772a478e                                                                                         19 minutes ago      Exited              etcd                      0                   ddc923564de22       etcd-default-k8s-diff-port-186820
	10a7ca49cb32f       996be7e86d9b3                                                                                         19 minutes ago      Exited              kube-controller-manager   0                   e0eeed2acb2c0       kube-controller-manager-default-k8s-diff-port-186820
	4143337be7961       d291939e99406                                                                                         19 minutes ago      Exited              kube-apiserver            0                   4e6884310d1b4       kube-apiserver-default-k8s-diff-port-186820
	976b937428341       a25f5ef9c34c3                                                                                         19 minutes ago      Exited              kube-scheduler            0                   c1d647945a1fb       kube-scheduler-default-k8s-diff-port-186820
	
	
	==> coredns [09ce2b32e938] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37889 - 23687 "HINFO IN 9099155277532789114.850322349326940009. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.027739509s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [cfdd90547c83] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/arm64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40133 - 52329 "HINFO IN 3160799206667991236.5911197496832820412. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.003928481s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
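	The dial tcp 10.96.0.1:443 i/o timeouts above are typical while the restarted apiserver is still coming back; CoreDNS recovers once the in-cluster Service is reachable again. A minimal after-the-fact check, assuming the profile's kubectl context is available:

	  $ kubectl --context default-k8s-diff-port-186820 get svc kubernetes -n default
	  $ kubectl --context default-k8s-diff-port-186820 get endpointslices -n default -l kubernetes.io/service-name=kubernetes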
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-186820
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=default-k8s-diff-port-186820
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=aad2f46d67652a73456765446faac83429b43d5e
	                    minikube.k8s.io/name=default-k8s-diff-port-186820
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_29T14_35_04_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Sep 2025 14:35:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-186820
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Sep 2025 14:54:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Sep 2025 14:52:15 +0000   Mon, 29 Sep 2025 14:34:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Sep 2025 14:52:15 +0000   Mon, 29 Sep 2025 14:34:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Sep 2025 14:52:15 +0000   Mon, 29 Sep 2025 14:34:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Sep 2025 14:52:15 +0000   Mon, 29 Sep 2025 14:35:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-186820
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022300Ki
	  pods:               110
	System Info:
	  Machine ID:                 de263a2c3db04d31a5a11d96202af393
	  System UUID:                e2931296-2bdf-4282-ac79-ad3b5addc2af
	  Boot ID:                    b9a0c89a-b2b5-4b29-bf62-29a4a55f08f1
	  Kernel Version:             5.15.0-1084-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  docker://28.4.0
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-66bc5c9577-wb8jw                                100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     19m
	  kube-system                 etcd-default-k8s-diff-port-186820                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         19m
	  kube-system                 kube-apiserver-default-k8s-diff-port-186820             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-186820    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-xbpqv                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-default-k8s-diff-port-186820             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 metrics-server-746fcd58dc-nbbb9                         100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-zfpvt              0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tdxbq                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             370Mi (4%)  170Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 19m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 19m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 19m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  19m                kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    19m                kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     19m                kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           19m                node-controller  Node default-k8s-diff-port-186820 event: Registered Node default-k8s-diff-port-186820 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Warning  CgroupV1                 18m                kubelet          cgroup v1 support is in maintenance mode, please migrate to cgroup v2
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-186820 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-186820 event: Registered Node default-k8s-diff-port-186820 in Controller
	
	
	==> dmesg <==
	
	
	==> etcd [19777c9fb07d] <==
	{"level":"warn","ts":"2025-09-29T14:35:55.470783Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.501021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.534435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.559773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59656","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.601975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.621776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.650672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59698","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.688877Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.710190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.731716Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.755574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59766","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.779014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.794347Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.815239Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.839666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.870291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.888255Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:55.906275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:56.014445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59884","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:45:53.819572Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1000}
	{"level":"info","ts":"2025-09-29T14:45:53.834054Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1000,"took":"13.987822ms","hash":3368848370,"current-db-size-bytes":3133440,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":3133440,"current-db-size-in-use":"3.1 MB"}
	{"level":"info","ts":"2025-09-29T14:45:53.834339Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3368848370,"revision":1000,"compact-revision":-1}
	{"level":"info","ts":"2025-09-29T14:50:53.825607Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1255}
	{"level":"info","ts":"2025-09-29T14:50:53.829240Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1255,"took":"3.343896ms","hash":4231518471,"current-db-size-bytes":3133440,"current-db-size":"3.1 MB","current-db-size-in-use-bytes":1810432,"current-db-size-in-use":"1.8 MB"}
	{"level":"info","ts":"2025-09-29T14:50:53.829298Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":4231518471,"revision":1255,"compact-revision":1000}
	
	
	==> etcd [f8c7812825a6] <==
	{"level":"warn","ts":"2025-09-29T14:34:59.961012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:34:59.989611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.013961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.131238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.165362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.178718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-29T14:35:00.335890Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59906","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-29T14:35:31.515574Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-29T14:35:31.515639Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"default-k8s-diff-port-186820","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	{"level":"error","ts":"2025-09-29T14:35:31.515761Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:35:38.518572Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-29T14:35:38.518837Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:35:38.518942Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"ea7e25599daad906","current-leader-member-id":"ea7e25599daad906"}
	{"level":"info","ts":"2025-09-29T14:35:38.519126Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-09-29T14:35:38.519190Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-09-29T14:35:38.520466Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:35:38.520662Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:35:38.520723Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-29T14:35:38.520937Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-29T14:35:38.521047Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.76.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-29T14:35:38.521148Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:35:38.523420Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"error","ts":"2025-09-29T14:35:38.523699Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.76.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-29T14:35:38.523861Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.76.2:2380"}
	{"level":"info","ts":"2025-09-29T14:35:38.523984Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"default-k8s-diff-port-186820","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.76.2:2380"],"advertise-client-urls":["https://192.168.76.2:2379"]}
	
	
	==> kernel <==
	 14:54:39 up  6:37,  0 users,  load average: 0.27, 0.51, 1.09
	Linux default-k8s-diff-port-186820 5.15.0-1084-aws #91~20.04.1-Ubuntu SMP Fri May 2 07:00:04 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kube-apiserver [2f7b7ee7a1f8] <==
	I0929 14:50:57.960812       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:51:53.878011       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:51:57.959980       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:51:57.960026       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:51:57.960040       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:51:57.961201       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:51:57.961263       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:51:57.961281       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:52:01.038983       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:53:11.151696       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:53:12.097344       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0929 14:53:57.960292       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:53:57.960346       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0929 14:53:57.960569       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0929 14:53:57.961457       1 handler_proxy.go:99] no RequestInfo found in the context
	E0929 14:53:57.961638       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0929 14:53:57.961688       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0929 14:54:22.031763       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0929 14:54:24.328205       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-apiserver [4143337be796] <==
	W0929 14:35:40.768572       1 logging.go:55] [core] [Channel #155 SubChannel #157]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.771050       1 logging.go:55] [core] [Channel #1 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.772569       1 logging.go:55] [core] [Channel #175 SubChannel #177]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.809474       1 logging.go:55] [core] [Channel #171 SubChannel #173]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.810937       1 logging.go:55] [core] [Channel #163 SubChannel #165]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.883220       1 logging.go:55] [core] [Channel #187 SubChannel #189]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.914767       1 logging.go:55] [core] [Channel #211 SubChannel #213]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.938695       1 logging.go:55] [core] [Channel #123 SubChannel #125]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:40.952633       1 logging.go:55] [core] [Channel #143 SubChannel #145]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.010308       1 logging.go:55] [core] [Channel #131 SubChannel #133]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.041243       1 logging.go:55] [core] [Channel #255 SubChannel #257]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.061925       1 logging.go:55] [core] [Channel #115 SubChannel #117]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.061925       1 logging.go:55] [core] [Channel #7 SubChannel #9]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.074342       1 logging.go:55] [core] [Channel #107 SubChannel #109]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.085131       1 logging.go:55] [core] [Channel #235 SubChannel #237]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.142169       1 logging.go:55] [core] [Channel #239 SubChannel #241]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.182434       1 logging.go:55] [core] [Channel #103 SubChannel #105]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.276796       1 logging.go:55] [core] [Channel #167 SubChannel #169]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.288334       1 logging.go:55] [core] [Channel #63 SubChannel #65]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.312844       1 logging.go:55] [core] [Channel #43 SubChannel #45]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.321633       1 logging.go:55] [core] [Channel #227 SubChannel #229]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.453520       1 logging.go:55] [core] [Channel #159 SubChannel #161]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.511380       1 logging.go:55] [core] [Channel #247 SubChannel #249]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.514919       1 logging.go:55] [core] [Channel #47 SubChannel #49]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0929 14:35:41.544607       1 logging.go:55] [core] [Channel #147 SubChannel #149]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-controller-manager [10a7ca49cb32] <==
	I0929 14:35:08.245993       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0929 14:35:08.246024       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0929 14:35:08.246122       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0929 14:35:08.246248       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0929 14:35:08.246518       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0929 14:35:08.246534       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0929 14:35:08.246546       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0929 14:35:08.246905       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0929 14:35:08.247039       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0929 14:35:08.247158       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0929 14:35:08.247615       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0929 14:35:08.248062       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0929 14:35:08.248483       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0929 14:35:08.249759       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0929 14:35:08.252890       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I0929 14:35:08.252919       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0929 14:35:08.253268       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0929 14:35:08.253482       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0929 14:35:08.253497       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0929 14:35:08.253505       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0929 14:35:08.252959       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0929 14:35:08.255343       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I0929 14:35:08.263079       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-186820" podCIDRs=["10.244.0.0/24"]
	I0929 14:35:08.274345       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	E0929 14:35:30.887552       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-746fcd58dc\" failed with pods \"metrics-server-746fcd58dc-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [bf98b9af0d1b] <==
	I0929 14:48:32.564362       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:49:02.449054       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:49:02.571738       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:49:32.454301       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:49:32.580766       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:50:02.459822       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:50:02.588416       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:50:32.463921       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:50:32.596589       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:51:02.468809       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:51:02.606655       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:51:32.472878       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:51:32.620483       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:52:02.477215       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:52:02.629370       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:52:32.482413       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:52:32.636763       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:53:02.487822       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:53:02.644074       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:53:32.492654       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:53:32.651117       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:54:02.497913       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:54:02.659834       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E0929 14:54:32.502573       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0929 14:54:32.667433       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
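	The repeated "stale GroupVersion discovery: metrics.k8s.io/v1beta1" errors follow from the metrics-server pod never becoming ready (its image pull fails against fake.domain, see the dockerd log above), so the aggregated API stays unavailable. A quick way to confirm that chain (the k8s-app=metrics-server label is the usual addon label, assumed here):

	  $ kubectl --context default-k8s-diff-port-186820 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context default-k8s-diff-port-186820 -n kube-system get pods -l k8s-app=metrics-server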
	
	
	==> kube-proxy [9bcc157f5d0b] <==
	I0929 14:35:10.165639       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:35:10.306443       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:35:10.407234       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:35:10.407291       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 14:35:10.407379       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:35:10.454449       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:35:10.454585       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:35:10.482598       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:35:10.483189       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:35:10.483207       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:35:10.487889       1 config.go:200] "Starting service config controller"
	I0929 14:35:10.487906       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:35:10.500758       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:35:10.500830       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:35:10.500868       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:35:10.500873       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:35:10.502037       1 config.go:309] "Starting node config controller"
	I0929 14:35:10.502047       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:35:10.502055       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:35:10.589794       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:35:10.601734       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 14:35:10.601768       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [e8c1cb770762] <==
	I0929 14:35:59.296846       1 server_linux.go:53] "Using iptables proxy"
	I0929 14:35:59.378257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0929 14:35:59.478710       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0929 14:35:59.478747       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E0929 14:35:59.478880       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0929 14:35:59.506784       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0929 14:35:59.506846       1 server_linux.go:132] "Using iptables Proxier"
	I0929 14:35:59.521655       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0929 14:35:59.522106       1 server.go:527] "Version info" version="v1.34.0"
	I0929 14:35:59.522130       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:35:59.523731       1 config.go:200] "Starting service config controller"
	I0929 14:35:59.523747       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0929 14:35:59.523763       1 config.go:106] "Starting endpoint slice config controller"
	I0929 14:35:59.523767       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0929 14:35:59.523789       1 config.go:403] "Starting serviceCIDR config controller"
	I0929 14:35:59.523793       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0929 14:35:59.528361       1 config.go:309] "Starting node config controller"
	I0929 14:35:59.528400       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0929 14:35:59.528409       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0929 14:35:59.627795       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0929 14:35:59.627912       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0929 14:35:59.627937       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [1befa8ef69ed] <==
	I0929 14:35:54.618445       1 serving.go:386] Generated self-signed cert in-memory
	W0929 14:35:56.799923       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0929 14:35:56.799966       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0929 14:35:56.799977       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0929 14:35:56.799985       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0929 14:35:56.977173       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0929 14:35:56.977203       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0929 14:35:56.979841       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0929 14:35:56.979948       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:35:56.979971       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:35:56.979994       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0929 14:35:57.080386       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [976b93742834] <==
	E0929 14:35:01.362533       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:35:01.362617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:35:01.362822       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 14:35:01.362945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0929 14:35:01.363017       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0929 14:35:01.363047       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0929 14:35:01.363131       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0929 14:35:01.362858       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 14:35:01.363213       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0929 14:35:02.178153       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0929 14:35:02.197115       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0929 14:35:02.251908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0929 14:35:02.290175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_arm64.s:1223" type="*v1.ConfigMap"
	E0929 14:35:02.370997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0929 14:35:02.397453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0929 14:35:02.481809       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0929 14:35:02.500622       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0929 14:35:02.525150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	I0929 14:35:04.414622       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:35:31.491694       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0929 14:35:31.491803       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0929 14:35:31.491814       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0929 14:35:31.491833       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0929 14:35:31.492128       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0929 14:35:31.492146       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Sep 29 14:52:56 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:52:56.616993    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:53:00 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:00.613758    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:53:04 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:04.610901    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:53:08 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:08.618577    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:53:15 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:15.610133    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:53:15 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:15.610774    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:53:23 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:23.608857    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:53:28 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:28.608396    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:53:30 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:30.608872    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:53:35 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:35.608772    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:53:40 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:40.610498    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:53:44 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:44.608405    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:53:46 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:46.610008    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:53:52 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:52.611314    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:53:55 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:53:55.608665    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:54:00 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:00.610161    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:54:04 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:04.611441    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:54:09 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:09.608712    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:54:13 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:13.608718    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:54:17 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:17.609438    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:54:24 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:24.608966    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:54:26 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:26.609885    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	Sep 29 14:54:31 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:31.610073    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubernetes-dashboard\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93\\\": ErrImagePull: Error response from daemon: toomanyrequests: You have reached your unauthenticated pull rate limit. https://www.docker.com/increase-rate-limit\"" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tdxbq" podUID="4e2ddb81-1cba-47a1-897a-4f8a7912d3f3"
	Sep 29 14:54:38 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:38.617323    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Docker Image Format v1 and Docker Image manifest version 2, schema 1 support has been removed. Suggest the author of registry.k8s.io/echoserver:1.4 to upgrade the image to the OCI Format or Docker Image manifest v2, schema 2. More information at https://docs.docker.com/go/deprecated-image-specs/\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-zfpvt" podUID="ac110471-f111-4931-b3aa-bdc227132dfe"
	Sep 29 14:54:39 default-k8s-diff-port-186820 kubelet[1396]: E0929 14:54:39.609210    1396 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: Error response from daemon: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain on 192.168.76.1:53: no such host\"" pod="kube-system/metrics-server-746fcd58dc-nbbb9" podUID="43fcdf52-1359-4a10-8f64-c721fa11c8c2"
	
	
	==> storage-provisioner [1b07bacc7362] <==
	I0929 14:35:59.254007       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0929 14:36:29.256948       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a41c3ebb08e2] <==
	W0929 14:54:15.538641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:17.541650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:17.546401       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:19.550026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:19.557436       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:21.561206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:21.565632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:23.569204       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:23.576048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:25.578904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:25.583795       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:27.586650       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:27.593900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:29.597246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:29.604157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:31.607503       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:31.615123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:33.618808       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:33.623686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:35.627642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:35.633090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:37.636347       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:37.644189       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:39.647525       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0929 14:54:39.652734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-nbbb9 dashboard-metrics-scraper-6ffb444bf9-zfpvt kubernetes-dashboard-855c9754f9-tdxbq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 describe pod metrics-server-746fcd58dc-nbbb9 dashboard-metrics-scraper-6ffb444bf9-zfpvt kubernetes-dashboard-855c9754f9-tdxbq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-186820 describe pod metrics-server-746fcd58dc-nbbb9 dashboard-metrics-scraper-6ffb444bf9-zfpvt kubernetes-dashboard-855c9754f9-tdxbq: exit status 1 (89.215647ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-nbbb9" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-zfpvt" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-tdxbq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context default-k8s-diff-port-186820 describe pod metrics-server-746fcd58dc-nbbb9 dashboard-metrics-scraper-6ffb444bf9-zfpvt kubernetes-dashboard-855c9754f9-tdxbq: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.92s)
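Note on the failure above (commentary, not part of the test output): the kubelet log shows three separate image-pull problems behind the non-running pods. The kubernetes-dashboard image is throttled by Docker Hub's unauthenticated pull limit, dashboard-metrics-scraper uses the schema-1 registry.k8s.io/echoserver:1.4 image that current Docker daemons reject, and metrics-server points at the unreachable fake.domain registry, which appears intentional on the test's part. For a local re-run, one way to side-step the rate-limit portion is sketched below; it is only a sketch, reusing the profile name from the logs above, and because the pod spec pins the dashboard image by digest the pre-loaded image must be the same one that digest refers to.

	docker login                                                                        # authenticate the host daemon so Docker Hub pulls are no longer rate limited
	docker pull kubernetesui/dashboard:v2.7.0                                           # fetch the dashboard image on the host
	minikube -p default-k8s-diff-port-186820 image load kubernetesui/dashboard:v2.7.0   # copy it into the profile's container runtime
	kubectl --context default-k8s-diff-port-186820 get pods -A --field-selector=status.phase!=Running   # confirm nothing is still stuck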

                                                
                                    

Test pass (303/341)

Order    Passed test    Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 6.68
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.09
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 6.84
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.08
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
22 TestOffline 91.13
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 215.51
29 TestAddons/serial/Volcano 42.12
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 11.01
35 TestAddons/parallel/Registry 17.86
36 TestAddons/parallel/RegistryCreds 0.91
37 TestAddons/parallel/Ingress 18.54
38 TestAddons/parallel/InspektorGadget 6.28
39 TestAddons/parallel/MetricsServer 5.92
41 TestAddons/parallel/CSI 33.95
42 TestAddons/parallel/Headlamp 23.71
43 TestAddons/parallel/CloudSpanner 6.55
44 TestAddons/parallel/LocalPath 53.15
45 TestAddons/parallel/NvidiaDevicePlugin 5.65
46 TestAddons/parallel/Yakd 11.72
48 TestAddons/StoppedEnableDisable 11.22
49 TestCertOptions 37.79
50 TestCertExpiration 271.64
51 TestDockerFlags 50.18
52 TestForceSystemdFlag 44.97
53 TestForceSystemdEnv 49.34
59 TestErrorSpam/setup 34.73
60 TestErrorSpam/start 0.8
61 TestErrorSpam/status 1.05
62 TestErrorSpam/pause 1.4
63 TestErrorSpam/unpause 1.55
64 TestErrorSpam/stop 11.02
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 69.04
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 56.39
71 TestFunctional/serial/KubeContext 0.06
72 TestFunctional/serial/KubectlGetPods 0.09
75 TestFunctional/serial/CacheCmd/cache/add_remote 2.77
76 TestFunctional/serial/CacheCmd/cache/add_local 1.03
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
78 TestFunctional/serial/CacheCmd/cache/list 0.06
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
81 TestFunctional/serial/CacheCmd/cache/delete 0.12
82 TestFunctional/serial/MinikubeKubectlCmd 0.14
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.13
84 TestFunctional/serial/ExtraConfig 54.99
85 TestFunctional/serial/ComponentHealth 0.13
86 TestFunctional/serial/LogsCmd 1.27
87 TestFunctional/serial/LogsFileCmd 1.3
88 TestFunctional/serial/InvalidService 4.53
90 TestFunctional/parallel/ConfigCmd 0.49
92 TestFunctional/parallel/DryRun 0.69
93 TestFunctional/parallel/InternationalLanguage 0.26
94 TestFunctional/parallel/StatusCmd 1.14
98 TestFunctional/parallel/ServiceCmdConnect 9.69
99 TestFunctional/parallel/AddonsCmd 0.35
100 TestFunctional/parallel/PersistentVolumeClaim 28.36
102 TestFunctional/parallel/SSHCmd 0.81
103 TestFunctional/parallel/CpCmd 2.11
105 TestFunctional/parallel/FileSync 0.4
106 TestFunctional/parallel/CertSync 2.09
110 TestFunctional/parallel/NodeLabels 0.1
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.36
114 TestFunctional/parallel/License 0.39
115 TestFunctional/parallel/Version/short 0.05
116 TestFunctional/parallel/Version/components 1.05
117 TestFunctional/parallel/ImageCommands/ImageListShort 0.21
118 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
119 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
120 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
121 TestFunctional/parallel/ImageCommands/ImageBuild 3.57
122 TestFunctional/parallel/ImageCommands/Setup 0.68
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.29
124 TestFunctional/parallel/DockerEnv/bash 1.52
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.1
126 TestFunctional/parallel/UpdateContextCmd/no_changes 0.16
127 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
128 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.19
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.4
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
132 TestFunctional/parallel/ProfileCmd/profile_not_create 0.54
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
134 TestFunctional/parallel/ProfileCmd/profile_list 0.52
135 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.5
136 TestFunctional/parallel/ProfileCmd/profile_json_output 0.55
138 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.79
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.34
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.12
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/ServiceCmd/DeployApp 6.27
149 TestFunctional/parallel/ServiceCmd/List 0.51
150 TestFunctional/parallel/ServiceCmd/JSONOutput 0.51
151 TestFunctional/parallel/ServiceCmd/HTTPS 0.38
152 TestFunctional/parallel/ServiceCmd/Format 0.39
153 TestFunctional/parallel/ServiceCmd/URL 0.39
154 TestFunctional/parallel/MountCmd/any-port 8.5
155 TestFunctional/parallel/MountCmd/specific-port 1.8
156 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
157 TestFunctional/delete_echo-server_images 0.04
158 TestFunctional/delete_my-image_image 0.02
159 TestFunctional/delete_minikube_cached_images 0.02
164 TestMultiControlPlane/serial/StartCluster 147.58
167 TestMultiControlPlane/serial/AddWorkerNode 18.68
168 TestMultiControlPlane/serial/NodeLabels 0.16
169 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.73
170 TestMultiControlPlane/serial/CopyFile 21.04
171 TestMultiControlPlane/serial/StopSecondaryNode 11.98
172 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.83
173 TestMultiControlPlane/serial/RestartSecondaryNode 44.33
174 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.39
175 TestMultiControlPlane/serial/RestartClusterKeepsNodes 219.14
176 TestMultiControlPlane/serial/DeleteSecondaryNode 11.33
177 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.76
178 TestMultiControlPlane/serial/StopCluster 32.75
179 TestMultiControlPlane/serial/RestartCluster 112.63
180 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.78
181 TestMultiControlPlane/serial/AddSecondaryNode 45.12
182 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.39
185 TestImageBuild/serial/Setup 32.71
186 TestImageBuild/serial/NormalBuild 1.97
187 TestImageBuild/serial/BuildWithBuildArg 0.98
188 TestImageBuild/serial/BuildWithDockerIgnore 0.87
189 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.85
193 TestJSONOutput/start/Command 68.53
194 TestJSONOutput/start/Audit 0
196 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/pause/Command 0.6
200 TestJSONOutput/pause/Audit 0
202 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/unpause/Command 0.53
206 TestJSONOutput/unpause/Audit 0
208 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
211 TestJSONOutput/stop/Command 10.93
212 TestJSONOutput/stop/Audit 0
214 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
215 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
216 TestErrorJSONOutput 0.24
218 TestKicCustomNetwork/create_custom_network 34.6
219 TestKicCustomNetwork/use_default_bridge_network 32.9
220 TestKicExistingNetwork 35.93
221 TestKicCustomSubnet 34.32
222 TestKicStaticIP 33.58
223 TestMainNoArgs 0.05
224 TestMinikubeProfile 74.19
227 TestMountStart/serial/StartWithMountFirst 9.12
228 TestMountStart/serial/VerifyMountFirst 0.26
229 TestMountStart/serial/StartWithMountSecond 8.29
230 TestMountStart/serial/VerifyMountSecond 0.28
231 TestMountStart/serial/DeleteFirst 1.48
232 TestMountStart/serial/VerifyMountPostDelete 0.27
233 TestMountStart/serial/Stop 1.21
234 TestMountStart/serial/RestartStopped 8.31
235 TestMountStart/serial/VerifyMountPostStop 0.26
238 TestMultiNode/serial/FreshStart2Nodes 70.31
239 TestMultiNode/serial/DeployApp2Nodes 39.08
240 TestMultiNode/serial/PingHostFrom2Pods 1.01
241 TestMultiNode/serial/AddNode 16.47
242 TestMultiNode/serial/MultiNodeLabels 0.13
243 TestMultiNode/serial/ProfileList 0.88
244 TestMultiNode/serial/CopyFile 10.99
245 TestMultiNode/serial/StopNode 2.28
246 TestMultiNode/serial/StartAfterStop 9.63
247 TestMultiNode/serial/RestartKeepsNodes 77.33
248 TestMultiNode/serial/DeleteNode 5.87
249 TestMultiNode/serial/StopMultiNode 21.63
250 TestMultiNode/serial/RestartMultiNode 54.73
251 TestMultiNode/serial/ValidateNameConflict 36.82
256 TestPreload 152.22
258 TestScheduledStopUnix 106.41
259 TestSkaffold 140.56
261 TestInsufficientStorage 11.5
262 TestRunningBinaryUpgrade 78.57
264 TestKubernetesUpgrade 386.98
265 TestMissingContainerUpgrade 101.26
267 TestPause/serial/Start 107.02
269 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
270 TestNoKubernetes/serial/StartWithK8s 33.9
271 TestPause/serial/SecondStartNoReconfiguration 53.09
272 TestNoKubernetes/serial/StartWithStopK8s 19.88
273 TestNoKubernetes/serial/Start 8.27
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.3
275 TestNoKubernetes/serial/ProfileList 1.13
276 TestNoKubernetes/serial/Stop 1.46
277 TestNoKubernetes/serial/StartNoArgs 8.83
278 TestPause/serial/Pause 0.79
279 TestPause/serial/VerifyStatus 0.38
280 TestPause/serial/Unpause 0.64
281 TestPause/serial/PauseAgain 0.81
282 TestPause/serial/DeletePaused 2.35
283 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.34
284 TestPause/serial/VerifyDeletedResources 0.45
296 TestStoppedBinaryUpgrade/Setup 0.74
297 TestStoppedBinaryUpgrade/Upgrade 88.68
298 TestStoppedBinaryUpgrade/MinikubeLogs 1.12
306 TestNetworkPlugins/group/auto/Start 78.32
307 TestNetworkPlugins/group/auto/KubeletFlags 0.29
308 TestNetworkPlugins/group/auto/NetCatPod 10.3
309 TestNetworkPlugins/group/auto/DNS 0.19
310 TestNetworkPlugins/group/auto/Localhost 0.14
311 TestNetworkPlugins/group/auto/HairPin 0.15
312 TestNetworkPlugins/group/kindnet/Start 67.44
313 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
315 TestNetworkPlugins/group/kindnet/NetCatPod 11.26
316 TestNetworkPlugins/group/kindnet/DNS 0.18
317 TestNetworkPlugins/group/kindnet/Localhost 0.17
318 TestNetworkPlugins/group/kindnet/HairPin 0.16
320 TestNetworkPlugins/group/custom-flannel/Start 62.6
321 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.41
322 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.4
323 TestNetworkPlugins/group/custom-flannel/DNS 0.2
324 TestNetworkPlugins/group/custom-flannel/Localhost 0.17
325 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
326 TestNetworkPlugins/group/false/Start 81.74
327 TestNetworkPlugins/group/false/KubeletFlags 0.36
328 TestNetworkPlugins/group/false/NetCatPod 10.41
329 TestNetworkPlugins/group/false/DNS 0.19
330 TestNetworkPlugins/group/false/Localhost 0.2
331 TestNetworkPlugins/group/false/HairPin 0.23
332 TestNetworkPlugins/group/enable-default-cni/Start 79.26
333 TestNetworkPlugins/group/flannel/Start 122.48
334 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
335 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.28
336 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
337 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
338 TestNetworkPlugins/group/enable-default-cni/HairPin 0.19
339 TestNetworkPlugins/group/bridge/Start 74.15
340 TestNetworkPlugins/group/flannel/ControllerPod 6
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.4
342 TestNetworkPlugins/group/flannel/NetCatPod 11.49
343 TestNetworkPlugins/group/flannel/DNS 0.2
344 TestNetworkPlugins/group/flannel/Localhost 0.16
345 TestNetworkPlugins/group/flannel/HairPin 0.22
346 TestNetworkPlugins/group/kubenet/Start 70.88
347 TestNetworkPlugins/group/bridge/KubeletFlags 0.35
348 TestNetworkPlugins/group/bridge/NetCatPod 12.33
349 TestNetworkPlugins/group/bridge/DNS 0.32
350 TestNetworkPlugins/group/bridge/Localhost 0.25
351 TestNetworkPlugins/group/bridge/HairPin 0.27
353 TestStartStop/group/old-k8s-version/serial/FirstStart 78.23
354 TestNetworkPlugins/group/kubenet/KubeletFlags 0.45
355 TestNetworkPlugins/group/kubenet/NetCatPod 11.4
356 TestNetworkPlugins/group/kubenet/DNS 0.22
357 TestNetworkPlugins/group/kubenet/Localhost 0.15
358 TestNetworkPlugins/group/kubenet/HairPin 0.2
360 TestStartStop/group/no-preload/serial/FirstStart 80.38
361 TestStartStop/group/old-k8s-version/serial/DeployApp 10.45
362 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.65
363 TestStartStop/group/old-k8s-version/serial/Stop 11.22
364 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.22
365 TestStartStop/group/old-k8s-version/serial/SecondStart 29.97
367 TestStartStop/group/no-preload/serial/DeployApp 9.39
368 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.12
369 TestStartStop/group/no-preload/serial/Stop 10.92
370 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
371 TestStartStop/group/no-preload/serial/SecondStart 52.56
375 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.25
376 TestStartStop/group/old-k8s-version/serial/Pause 3.02
378 TestStartStop/group/embed-certs/serial/FirstStart 72.09
379 TestStartStop/group/embed-certs/serial/DeployApp 10.5
380 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
381 TestStartStop/group/no-preload/serial/Pause 2.95
383 TestStartStop/group/newest-cni/serial/FirstStart 43.5
384 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.28
385 TestStartStop/group/embed-certs/serial/Stop 11.24
386 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.24
387 TestStartStop/group/embed-certs/serial/SecondStart 59.5
388 TestStartStop/group/newest-cni/serial/DeployApp 0
389 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.16
390 TestStartStop/group/newest-cni/serial/Stop 10.97
391 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
392 TestStartStop/group/newest-cni/serial/SecondStart 17.01
393 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
394 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
395 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.36
396 TestStartStop/group/newest-cni/serial/Pause 3.25
399 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 47.56
400 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.4
401 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
402 TestStartStop/group/default-k8s-diff-port/serial/Stop 11
403 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
404 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 52.31
408 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
409 TestStartStop/group/embed-certs/serial/Pause 2.93
410 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
411 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.84
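For reference, an individual entry from the table above can typically be re-run from a minikube source checkout with standard Go test tooling. The package path below is the conventional location of the *_test.go files quoted throughout this report (an assumption here), the -tags flag and timeout are likewise assumptions, and the prebuilt binary at out/minikube-linux-arm64 is expected to exist just as in the runs above.

	go test -tags=integration ./test/integration -run "TestDownloadOnly/v1.28.0/json-events" -timeout 60m -v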
TestDownloadOnly/v1.28.0/json-events (6.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-974524 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-974524 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.674729642s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (6.68s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0929 13:01:53.451129 1127640 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
I0929 13:01:53.451216 1127640 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-974524
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-974524: exit status 85 (91.18757ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-974524 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-974524 │ jenkins │ v1.37.0 │ 29 Sep 25 13:01 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:01:46
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:01:46.819967 1127646 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:01:46.820191 1127646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:01:46.820219 1127646 out.go:374] Setting ErrFile to fd 2...
	I0929 13:01:46.820241 1127646 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:01:46.820587 1127646 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	W0929 13:01:46.820787 1127646 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21652-1125775/.minikube/config/config.json: open /home/jenkins/minikube-integration/21652-1125775/.minikube/config/config.json: no such file or directory
	I0929 13:01:46.821253 1127646 out.go:368] Setting JSON to true
	I0929 13:01:46.822145 1127646 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17059,"bootTime":1759133848,"procs":155,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 13:01:46.822247 1127646 start.go:140] virtualization:  
	I0929 13:01:46.826430 1127646 out.go:99] [download-only-974524] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	W0929 13:01:46.826629 1127646 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball: no such file or directory
	I0929 13:01:46.826749 1127646 notify.go:220] Checking for updates...
	I0929 13:01:46.830360 1127646 out.go:171] MINIKUBE_LOCATION=21652
	I0929 13:01:46.833879 1127646 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:01:46.836830 1127646 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:01:46.839719 1127646 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 13:01:46.842608 1127646 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0929 13:01:46.848469 1127646 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 13:01:46.848890 1127646 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:01:46.874012 1127646 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 13:01:46.874133 1127646 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:01:46.926429 1127646 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-29 13:01:46.917500162 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:01:46.926534 1127646 docker.go:318] overlay module found
	I0929 13:01:46.929499 1127646 out.go:99] Using the docker driver based on user configuration
	I0929 13:01:46.929534 1127646 start.go:304] selected driver: docker
	I0929 13:01:46.929547 1127646 start.go:924] validating driver "docker" against <nil>
	I0929 13:01:46.929669 1127646 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:01:46.984936 1127646 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:61 SystemTime:2025-09-29 13:01:46.975950347 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:01:46.985096 1127646 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 13:01:46.985381 1127646 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0929 13:01:46.985543 1127646 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 13:01:46.988633 1127646 out.go:171] Using Docker driver with root privileges
	I0929 13:01:46.991574 1127646 cni.go:84] Creating CNI manager for ""
	I0929 13:01:46.991661 1127646 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 13:01:46.991675 1127646 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 13:01:46.991757 1127646 start.go:348] cluster config:
	{Name:download-only-974524 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-974524 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:01:46.994682 1127646 out.go:99] Starting "download-only-974524" primary control-plane node in "download-only-974524" cluster
	I0929 13:01:46.994715 1127646 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 13:01:46.997483 1127646 out.go:99] Pulling base image v0.0.48 ...
	I0929 13:01:46.997517 1127646 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0929 13:01:46.997689 1127646 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:01:47.014925 1127646 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 13:01:47.015118 1127646 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 13:01:47.015222 1127646 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 13:01:47.058823 1127646 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0929 13:01:47.058850 1127646 cache.go:58] Caching tarball of preloaded images
	I0929 13:01:47.059022 1127646 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I0929 13:01:47.063449 1127646 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0929 13:01:47.063480 1127646 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 ...
	I0929 13:01:47.145582 1127646 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4?checksum=md5:002a73d62a3b066a08573cf3da2c8cb4 -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4
	I0929 13:01:51.801451 1127646 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 ...
	I0929 13:01:51.801701 1127646 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-974524 host does not exist
	  To start a cluster, run: "minikube start -p download-only-974524"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.09s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-974524
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.0/json-events (6.84s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-812289 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-812289 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (6.835854976s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (6.84s)

                                                
                                    
TestDownloadOnly/v1.34.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0929 13:02:00.741985 1127640 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
I0929 13:02:00.742028 1127640 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)
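The download-only flow verified above can be reproduced outside the test harness. A minimal sketch, assuming a locally built binary at out/minikube-linux-arm64 and your own MINIKUBE_HOME; the profile name and cache path below are taken from this run's logs:

  # fetch the v1.34.0 preload and kic base image without creating a cluster
  out/minikube-linux-arm64 start -o=json --download-only -p download-only-812289 --force \
    --alsologtostderr --kubernetes-version=v1.34.0 --driver=docker --container-runtime=docker

  # the preload tarball should then be present in the cache
  ls "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4"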

                                                
                                    
TestDownloadOnly/v1.34.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-812289
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-812289: exit status 85 (83.714319ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                     ARGS                                                                                      │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-974524 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-974524 │ jenkins │ v1.37.0 │ 29 Sep 25 13:01 UTC │                     │
	│ delete  │ --all                                                                                                                                                                         │ minikube             │ jenkins │ v1.37.0 │ 29 Sep 25 13:01 UTC │ 29 Sep 25 13:01 UTC │
	│ delete  │ -p download-only-974524                                                                                                                                                       │ download-only-974524 │ jenkins │ v1.37.0 │ 29 Sep 25 13:01 UTC │ 29 Sep 25 13:01 UTC │
	│ start   │ -o=json --download-only -p download-only-812289 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=docker --driver=docker  --container-runtime=docker │ download-only-812289 │ jenkins │ v1.37.0 │ 29 Sep 25 13:01 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/29 13:01:53
	Running on machine: ip-172-31-30-239
	Binary: Built with gc go1.24.6 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0929 13:01:53.949294 1127847 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:01:53.949470 1127847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:01:53.949483 1127847 out.go:374] Setting ErrFile to fd 2...
	I0929 13:01:53.949489 1127847 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:01:53.949780 1127847 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:01:53.950200 1127847 out.go:368] Setting JSON to true
	I0929 13:01:53.951081 1127847 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17066,"bootTime":1759133848,"procs":150,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 13:01:53.951154 1127847 start.go:140] virtualization:  
	I0929 13:01:53.954692 1127847 out.go:99] [download-only-812289] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 13:01:53.954922 1127847 notify.go:220] Checking for updates...
	I0929 13:01:53.958042 1127847 out.go:171] MINIKUBE_LOCATION=21652
	I0929 13:01:53.961057 1127847 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:01:53.964051 1127847 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:01:53.967111 1127847 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 13:01:53.970193 1127847 out.go:171] MINIKUBE_BIN=out/minikube-linux-arm64
	W0929 13:01:53.975940 1127847 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0929 13:01:53.976242 1127847 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:01:54.002835 1127847 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 13:01:54.002989 1127847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:01:54.065514 1127847 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-29 13:01:54.056240872 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:01:54.065628 1127847 docker.go:318] overlay module found
	I0929 13:01:54.068737 1127847 out.go:99] Using the docker driver based on user configuration
	I0929 13:01:54.068786 1127847 start.go:304] selected driver: docker
	I0929 13:01:54.068799 1127847 start.go:924] validating driver "docker" against <nil>
	I0929 13:01:54.068952 1127847 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:01:54.122695 1127847 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:49 SystemTime:2025-09-29 13:01:54.113780263 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:01:54.122862 1127847 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0929 13:01:54.123164 1127847 start_flags.go:410] Using suggested 3072MB memory alloc based on sys=7834MB, container=7834MB
	I0929 13:01:54.123322 1127847 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0929 13:01:54.126594 1127847 out.go:171] Using Docker driver with root privileges
	I0929 13:01:54.129651 1127847 cni.go:84] Creating CNI manager for ""
	I0929 13:01:54.129735 1127847 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0929 13:01:54.129749 1127847 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0929 13:01:54.129837 1127847 start.go:348] cluster config:
	{Name:download-only-812289 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:download-only-812289 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CR
ISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:01:54.132815 1127847 out.go:99] Starting "download-only-812289" primary control-plane node in "download-only-812289" cluster
	I0929 13:01:54.132846 1127847 cache.go:123] Beginning downloading kic base image for docker with docker
	I0929 13:01:54.135916 1127847 out.go:99] Pulling base image v0.0.48 ...
	I0929 13:01:54.135954 1127847 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:01:54.136132 1127847 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0929 13:01:54.152438 1127847 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0929 13:01:54.152610 1127847 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0929 13:01:54.152634 1127847 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0929 13:01:54.152644 1127847 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0929 13:01:54.152652 1127847 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0929 13:01:54.193044 1127847 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	I0929 13:01:54.193068 1127847 cache.go:58] Caching tarball of preloaded images
	I0929 13:01:54.193233 1127847 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime docker
	I0929 13:01:54.196302 1127847 out.go:99] Downloading Kubernetes v1.34.0 preload ...
	I0929 13:01:54.196333 1127847 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4 ...
	I0929 13:01:54.285766 1127847 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.0/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4?checksum=md5:0b3d43bc03104538fd9d40ba6a11edba -> /home/jenkins/minikube-integration/21652-1125775/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-docker-overlay2-arm64.tar.lz4
	
	
	* The control-plane node download-only-812289 host does not exist
	  To start a cluster, run: "minikube start -p download-only-812289"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

                                                
                                    
TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-812289
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestBinaryMirror (0.59s)

                                                
                                                
=== RUN   TestBinaryMirror
I0929 13:02:02.073521 1127640 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-522564 --alsologtostderr --binary-mirror http://127.0.0.1:41237 --driver=docker  --container-runtime=docker
helpers_test.go:175: Cleaning up "binary-mirror-522564" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-522564
--- PASS: TestBinaryMirror (0.59s)

                                                
                                    
TestOffline (91.13s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-arm64 start -p offline-docker-272506 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-arm64 start -p offline-docker-272506 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m28.968677158s)
helpers_test.go:175: Cleaning up "offline-docker-272506" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p offline-docker-272506
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p offline-docker-272506: (2.156559758s)
--- PASS: TestOffline (91.13s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-214477
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-214477: exit status 85 (71.074565ms)

                                                
                                                
-- stdout --
	* Profile "addons-214477" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214477"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-214477
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-214477: exit status 85 (75.866639ms)

                                                
                                                
-- stdout --
	* Profile "addons-214477" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-214477"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

                                                
                                    
TestAddons/Setup (215.51s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-arm64 start -p addons-214477 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-arm64 start -p addons-214477 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m35.50993343s)
--- PASS: TestAddons/Setup (215.51s)
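The setup above enables the whole addon matrix in a single start. A trimmed reproduction sketch, assuming the same binary path; the profile name, memory size, and addon list are copied from the invocation logged above:

  out/minikube-linux-arm64 start -p addons-214477 --wait=true --memory=4096 --alsologtostderr \
    --driver=docker --container-runtime=docker \
    --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots \
    --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget \
    --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin \
    --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher

  # individual addons are later toggled the same way the tests do, e.g.:
  out/minikube-linux-arm64 -p addons-214477 addons disable volcano --alsologtostderr -v=1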

                                                
                                    
TestAddons/serial/Volcano (42.12s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:876: volcano-admission stabilized in 71.571501ms
addons_test.go:868: volcano-scheduler stabilized in 72.749229ms
addons_test.go:884: volcano-controller stabilized in 73.141512ms
addons_test.go:890: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-scheduler-799f64f894-t9l7k" [8c1aed85-1836-4f9a-9768-637497b1f960] Running
addons_test.go:890: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.002946168s
addons_test.go:894: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-admission-589c7dd587-tj577" [d5d535bc-ea79-413c-a909-862410cbfaac] Running
addons_test.go:894: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003342956s
addons_test.go:898: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:352: "volcano-controllers-7dc6969b45-lv4s7" [f8b93b5a-e43c-494d-8865-5d0a21a91524] Running
addons_test.go:898: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003593911s
addons_test.go:903: (dbg) Run:  kubectl --context addons-214477 delete -n volcano-system job volcano-admission-init
addons_test.go:909: (dbg) Run:  kubectl --context addons-214477 create -f testdata/vcjob.yaml
addons_test.go:917: (dbg) Run:  kubectl --context addons-214477 get vcjob -n my-volcano
addons_test.go:935: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:352: "test-job-nginx-0" [2e9fc4c6-3108-4861-b6dd-ca8cd148d5bf] Pending
helpers_test.go:352: "test-job-nginx-0" [2e9fc4c6-3108-4861-b6dd-ca8cd148d5bf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "test-job-nginx-0" [2e9fc4c6-3108-4861-b6dd-ca8cd148d5bf] Running
addons_test.go:935: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 14.003579973s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-214477 addons disable volcano --alsologtostderr -v=1: (11.462837078s)
--- PASS: TestAddons/serial/Volcano (42.12s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.2s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-214477 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-214477 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (11.01s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-214477 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-214477 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [59f7f026-10d8-4f08-9b4c-4823817ec210] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [59f7f026-10d8-4f08-9b4c-4823817ec210] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003128904s
addons_test.go:694: (dbg) Run:  kubectl --context addons-214477 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-214477 describe sa gcp-auth-test
addons_test.go:720: (dbg) Run:  kubectl --context addons-214477 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:744: (dbg) Run:  kubectl --context addons-214477 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.01s)

                                                
                                    
TestAddons/parallel/Registry (17.86s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 9.279303ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-mzqpf" [e3ba057f-86fb-4cdc-991f-8d6a095013f3] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.006646708s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-zkh6s" [620a90f3-2e93-4884-ad30-0032e2367a17] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003791436s
addons_test.go:392: (dbg) Run:  kubectl --context addons-214477 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-214477 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-214477 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.830721594s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 ip
2025/09/29 13:06:57 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.86s)
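The registry check above reduces to two probes: an in-cluster HTTP request against the registry Service, and a host-side request to the node IP on port 5000. A sketch using the commands logged above; curl stands in for the test's internal HTTP GET:

  # in-cluster: the registry Service must answer over cluster DNS
  kubectl --context addons-214477 run --rm registry-test --restart=Never \
    --image=gcr.io/k8s-minikube/busybox -it -- \
    sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

  # host-side: registry-proxy publishes the registry on the node IP, port 5000
  curl -sI "http://$(out/minikube-linux-arm64 -p addons-214477 ip):5000"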

                                                
                                    
TestAddons/parallel/RegistryCreds (0.91s)

                                                
                                                
=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 7.126198ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-arm64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-214477
addons_test.go:332: (dbg) Run:  kubectl --context addons-214477 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.91s)

                                                
                                    
TestAddons/parallel/Ingress (18.54s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-214477 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-214477 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-214477 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [94a07a8c-34d4-4f75-82a6-52e1598b30bb] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [94a07a8c-34d4-4f75-82a6-52e1598b30bb] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.004700941s
I0929 13:08:11.695858 1127640 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-214477 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-214477 addons disable ingress --alsologtostderr -v=1: (7.726119443s)
--- PASS: TestAddons/parallel/Ingress (18.54s)
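The ingress verification pairs an in-node curl with a DNS lookup through ingress-dns. A sketch of the same two probes, using the hostnames from the test data and the node IP (192.168.49.2) reported by this run:

  # the nginx Ingress must answer for nginx.example.com from inside the node
  out/minikube-linux-arm64 -p addons-214477 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

  # ingress-dns must resolve test hostnames against the node IP
  nslookup hello-john.test 192.168.49.2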

                                                
                                    
TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-gqwlk" [1b85aede-f2f5-46b4-85b2-5a46ee2f3801] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.006236894s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (6.28s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.92s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 47.505559ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-k8g8z" [27829a8e-c241-49b3-82af-63c25122c379] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003879077s
addons_test.go:463: (dbg) Run:  kubectl --context addons-214477 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.92s)

                                                
                                    
TestAddons/parallel/CSI (33.95s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
I0929 13:06:52.380075 1127640 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0929 13:06:52.383848 1127640 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0929 13:06:52.383876 1127640 kapi.go:107] duration metric: took 6.803871ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 6.815433ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-214477 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-214477 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [d0a0626c-b248-4c2a-af9a-b202bfafdc3d] Pending
helpers_test.go:352: "task-pv-pod" [d0a0626c-b248-4c2a-af9a-b202bfafdc3d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [d0a0626c-b248-4c2a-af9a-b202bfafdc3d] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.00368241s
addons_test.go:572: (dbg) Run:  kubectl --context addons-214477 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-214477 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-214477 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-214477 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-214477 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-214477 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-214477 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7ace442e-8be1-47a2-b025-de11b63973a6] Pending
helpers_test.go:352: "task-pv-pod-restore" [7ace442e-8be1-47a2-b025-de11b63973a6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7ace442e-8be1-47a2-b025-de11b63973a6] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004824192s
addons_test.go:614: (dbg) Run:  kubectl --context addons-214477 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-214477 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-214477 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-214477 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.793805222s)
--- PASS: TestAddons/parallel/CSI (33.95s)

                                                
                                    
TestAddons/parallel/Headlamp (23.71s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-214477 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-bzd7z" [cb492bd3-8769-4b60-a900-98e4f943e467] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-bzd7z" [cb492bd3-8769-4b60-a900-98e4f943e467] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.003542163s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-214477 addons disable headlamp --alsologtostderr -v=1: (5.738754852s)
--- PASS: TestAddons/parallel/Headlamp (23.71s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-fz6vr" [3c3660c8-7608-4d77-bb6d-d320a1308f31] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.00330335s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.55s)

                                                
                                    
TestAddons/parallel/LocalPath (53.15s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-214477 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-214477 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-214477 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [8b4489ed-63e0-4a6a-8a92-08e6b8a3b60e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [8b4489ed-63e0-4a6a-8a92-08e6b8a3b60e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [8b4489ed-63e0-4a6a-8a92-08e6b8a3b60e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.003903849s
addons_test.go:967: (dbg) Run:  kubectl --context addons-214477 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 ssh "cat /opt/local-path-provisioner/pvc-697bf51c-45eb-4688-aeb2-df5b77a7150e_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-214477 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-214477 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-214477 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.876765315s)
--- PASS: TestAddons/parallel/LocalPath (53.15s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-6dwcw" [5d5ac5d4-4264-4172-a9c0-294224c389da] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0042746s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.65s)

                                                
                                    
TestAddons/parallel/Yakd (11.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-5n5fs" [592ac754-dcbb-45c2-9fd1-44089955de43] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003442735s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-arm64 -p addons-214477 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-arm64 -p addons-214477 addons disable yakd --alsologtostderr -v=1: (5.717878897s)
--- PASS: TestAddons/parallel/Yakd (11.72s)

                                                
                                    
TestAddons/StoppedEnableDisable (11.22s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-214477
addons_test.go:172: (dbg) Done: out/minikube-linux-arm64 stop -p addons-214477: (10.94946544s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-214477
addons_test.go:180: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-214477
addons_test.go:185: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-214477
--- PASS: TestAddons/StoppedEnableDisable (11.22s)

                                                
                                    
TestCertOptions (37.79s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-530446 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker
E0929 13:53:59.883568 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:53:59.890062 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:53:59.901417 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:53:59.922643 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:53:59.963998 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:54:00.048119 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:54:00.228152 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:54:00.550146 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:54:01.192155 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:54:02.473458 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:54:05.034941 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:54:10.156712 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:54:20.398446 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-530446 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (35.01234647s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-530446 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-530446 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-530446 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-530446" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-530446
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-530446: (2.124702575s)
--- PASS: TestCertOptions (37.79s)
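Not produced by the test itself: cert_options_test.go:60 above is, in essence, "the apiserver certificate must contain the extra IPs and names passed at start". A rough standalone version of that check, assuming out/minikube-linux-arm64 is built and a cert-options-530446 profile is still up (the cleanup above deletes it):

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Dump the apiserver cert from inside the node, then look for the values that
	// were passed via --apiserver-ips and --apiserver-names in the start command above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "cert-options-530446",
		"ssh", "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		log.Fatalf("ssh/openssl failed: %v\n%s", err, out)
	}
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		if !strings.Contains(string(out), want) {
			log.Fatalf("apiserver.crt is missing SAN %q", want)
		}
	}
	log.Println("apiserver.crt carries the requested SANs")
}

The follow-up commands in the log (the kubeconfig "config view" and the admin.conf cat) give the test a place to look for the custom --apiserver-port value, 8555.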

                                                
                                    
TestCertExpiration (271.64s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-605649 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-605649 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=docker: (45.624336793s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-605649 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-605649 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=docker: (43.490076377s)
helpers_test.go:175: Cleaning up "cert-expiration-605649" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-605649
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-605649: (2.522281487s)
--- PASS: TestCertExpiration (271.64s)
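For reference only, not part of the log: the effect of --cert-expiration can be inspected directly by asking openssl for the certificate's end date inside the node. A minimal sketch; the cert-expiration-605649 profile is deleted by the cleanup above, so the profile name here is purely illustrative.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Prints a line like "notAfter=...", which after the second start above
	// (--cert-expiration=8760h) should sit roughly one year in the future.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "cert-expiration-605649",
		"ssh", "openssl x509 -enddate -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		log.Fatalf("reading the certificate end date failed: %v\n%s", err, out)
	}
	fmt.Print(string(out))
}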

                                                
                                    
TestDockerFlags (50.18s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-linux-arm64 start -p docker-flags-851225 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:51: (dbg) Done: out/minikube-linux-arm64 start -p docker-flags-851225 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (47.217250212s)
docker_test.go:56: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-851225 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-linux-arm64 -p docker-flags-851225 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-851225" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-flags-851225
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-flags-851225: (2.226817473s)
--- PASS: TestDockerFlags (50.18s)
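Not in the log: the two docker_test.go assertions above boil down to "the --docker-env and --docker-opt values must show up in the Docker systemd unit inside the node". A rough equivalent, with the caveat that the docker-flags-851225 profile is deleted by the cleanup step, so this only works while such a profile exists:

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	// systemctl show prints the unit's Environment= line; the values below come from
	// the --docker-env flags in the start command above.
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "docker-flags-851225",
		"ssh", "sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		log.Fatalf("systemctl show failed: %v\n%s", err, out)
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), want) {
			log.Fatalf("Environment= is missing %q:\n%s", want, out)
		}
	}
	log.Println("docker-env values reached the docker unit")
}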

                                                
                                    
TestForceSystemdFlag (44.97s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-381565 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
E0929 13:52:50.245719 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-381565 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.793131562s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-381565 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-381565" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-381565
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-381565: (2.697636459s)
--- PASS: TestForceSystemdFlag (44.97s)
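Not part of the output: docker_test.go:110 simply asks the in-node Docker daemon which cgroup driver it ended up with; under --force-systemd the expected answer is "systemd". A one-check sketch (the profile below was already deleted by the cleanup, so the name is illustrative):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-381565",
		"ssh", "docker info --format {{.CgroupDriver}}").CombinedOutput()
	if err != nil {
		log.Fatalf("docker info failed: %v\n%s", err, out)
	}
	// Expected to print "systemd" when the node was started with --force-systemd.
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out)))
}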

                                                
                                    
TestForceSystemdEnv (49.34s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-578106 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-578106 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (46.328484947s)
docker_test.go:110: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-env-578106 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-578106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-578106
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-578106: (2.340602475s)
--- PASS: TestForceSystemdEnv (49.34s)

                                                
                                    
TestErrorSpam/setup (34.73s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-500852 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500852 --driver=docker  --container-runtime=docker
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-500852 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-500852 --driver=docker  --container-runtime=docker: (34.732859042s)
--- PASS: TestErrorSpam/setup (34.73s)

                                                
                                    
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.05s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 status
--- PASS: TestErrorSpam/status (1.05s)

                                                
                                    
TestErrorSpam/pause (1.4s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 pause
--- PASS: TestErrorSpam/pause (1.40s)

                                                
                                    
TestErrorSpam/unpause (1.55s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 unpause
--- PASS: TestErrorSpam/unpause (1.55s)

                                                
                                    
TestErrorSpam/stop (11.02s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 stop: (10.815539863s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-arm64 -p nospam-500852 --log_dir /tmp/nospam-500852 stop
--- PASS: TestErrorSpam/stop (11.02s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21652-1125775/.minikube/files/etc/test/nested/copy/1127640/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69.04s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085003 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
functional_test.go:2239: (dbg) Done: out/minikube-linux-arm64 start -p functional-085003 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (1m9.040946641s)
--- PASS: TestFunctional/serial/StartWithProxy (69.04s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (56.39s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0929 13:10:36.826819 1127640 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085003 --alsologtostderr -v=8
E0929 13:10:38.297800 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:38.304883 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:38.316229 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:38.337586 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:38.378965 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:38.460282 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:38.622428 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:38.944430 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:39.586879 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:40.868218 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:43.429631 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:48.551040 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:10:58.793425 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:11:19.274837 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-arm64 start -p functional-085003 --alsologtostderr -v=8: (56.388007343s)
functional_test.go:678: soft start took 56.390086471s for "functional-085003" cluster.
I0929 13:11:33.215211 1127640 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (56.39s)

                                                
                                    
TestFunctional/serial/KubeContext (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-085003 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.77s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-085003 /tmp/TestFunctionalserialCacheCmdcacheadd_local2290539618/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cache add minikube-local-cache-test:functional-085003
functional_test.go:1109: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cache delete minikube-local-cache-test:functional-085003
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-085003
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.03s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (310.271342ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)
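The sequence above (remove the image inside the node, confirm crictl no longer finds it, run cache reload, confirm it is back) is the whole contract of "minikube cache reload". A compressed standalone sketch of that round trip, assuming the functional-085003 profile is running and pause:latest was cache-added earlier, as in add_remote above:

package main

import (
	"log"
	"os/exec"
)

func run(args ...string) ([]byte, error) {
	return exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
}

func main() {
	if out, err := run("-p", "functional-085003", "ssh", "sudo docker rmi registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("rmi inside the node failed: %v\n%s", err, out)
	}
	// The inspect is now expected to fail: the image is gone from the node.
	if _, err := run("-p", "functional-085003", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err == nil {
		log.Fatal("image still present after rmi")
	}
	if out, err := run("-p", "functional-085003", "cache", "reload"); err != nil {
		log.Fatalf("cache reload failed: %v\n%s", err, out)
	}
	// After the reload the same inspect must succeed again.
	if out, err := run("-p", "functional-085003", "ssh", "sudo crictl inspecti registry.k8s.io/pause:latest"); err != nil {
		log.Fatalf("image still missing after cache reload: %v\n%s", err, out)
	}
	log.Println("cache reload restored registry.k8s.io/pause:latest")
}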

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 kubectl -- --context functional-085003 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-085003 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.13s)

                                                
                                    
TestFunctional/serial/ExtraConfig (54.99s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085003 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0929 13:12:00.237239 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-arm64 start -p functional-085003 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (54.986824051s)
functional_test.go:776: restart took 54.986928225s for "functional-085003" cluster.
I0929 13:12:34.487755 1127640 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (54.99s)
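Not part of the output: the restart above pushes a component flag through minikube's --extra-config mechanism (component.key=value). A minimal sketch of the same invocation plus a quick check, not the test's own, that the flag landed on the kube-apiserver static pod; it assumes kubectl is on PATH and the profile is running.

package main

import (
	"log"
	"os/exec"
	"strings"
)

func main() {
	start := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-085003",
		"--extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision", "--wait=all")
	if out, err := start.CombinedOutput(); err != nil {
		log.Fatalf("start failed: %v\n%s", err, out)
	}
	// The static pod's command line should now carry the admission plugin.
	out, err := exec.Command("kubectl", "--context", "functional-085003", "-n", "kube-system",
		"get", "pod", "-l", "component=kube-apiserver",
		"-o", "jsonpath={.items[0].spec.containers[0].command}").Output()
	if err != nil {
		log.Fatalf("kubectl failed: %v", err)
	}
	if !strings.Contains(string(out), "NamespaceAutoProvision") {
		log.Fatalf("admission plugin not found in the apiserver command: %s", out)
	}
	log.Println("apiserver is running with enable-admission-plugins=NamespaceAutoProvision")
}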

                                                
                                    
TestFunctional/serial/ComponentHealth (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-085003 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.13s)
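The phase/status lines above come from one kubectl call: functional_test.go:825 lists the tier=control-plane pods as JSON and then requires phase Running plus a Ready condition of True for each. A stripped-down sketch that decodes the same output, assuming kubectl and the functional-085003 context are available:

package main

import (
	"encoding/json"
	"log"
	"os/exec"
)

type podList struct {
	Items []struct {
		Metadata struct{ Name string }
		Status   struct {
			Phase      string
			Conditions []struct{ Type, Status string }
		}
	}
}

func main() {
	raw, err := exec.Command("kubectl", "--context", "functional-085003",
		"get", "po", "-l", "tier=control-plane", "-n", "kube-system", "-o=json").Output()
	if err != nil {
		log.Fatalf("kubectl failed: %v", err)
	}
	var pods podList
	if err := json.Unmarshal(raw, &pods); err != nil {
		log.Fatalf("decoding pod list: %v", err)
	}
	for _, p := range pods.Items {
		ready := false
		for _, c := range p.Status.Conditions {
			if c.Type == "Ready" && c.Status == "True" {
				ready = true
			}
		}
		log.Printf("%s phase=%s ready=%v", p.Metadata.Name, p.Status.Phase, ready)
		if p.Status.Phase != "Running" || !ready {
			log.Fatalf("control-plane pod %s is not healthy", p.Metadata.Name)
		}
	}
}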

                                                
                                    
TestFunctional/serial/LogsCmd (1.27s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-arm64 -p functional-085003 logs: (1.271350315s)
--- PASS: TestFunctional/serial/LogsCmd (1.27s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.3s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 logs --file /tmp/TestFunctionalserialLogsFileCmd981524293/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-arm64 -p functional-085003 logs --file /tmp/TestFunctionalserialLogsFileCmd981524293/001/logs.txt: (1.296609221s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.30s)

                                                
                                    
TestFunctional/serial/InvalidService (4.53s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-085003 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-085003
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-085003: exit status 115 (527.365426ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30570 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-085003 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.53s)
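Not produced by the test: the interesting part above is the exit status, since "minikube service" returns 115 with an SVC_UNREACHABLE reason when the service has no running pods behind it. A small sketch of reading that exit code from Go, assuming the invalid service from testdata/invalidsvc.yaml is currently applied:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "service", "invalid-svc", "-p", "functional-085003")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// In the run above this prints 115 and the SVC_UNREACHABLE advice block.
		fmt.Printf("minikube service exited with status %d\n", exitErr.ExitCode())
		fmt.Printf("output:\n%s", out)
		return
	}
	if err != nil {
		fmt.Println("command failed to run:", err)
		return
	}
	fmt.Println("service unexpectedly resolved:", string(out))
}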

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 config get cpus: exit status 14 (60.08106ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 config get cpus: exit status 14 (86.721216ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)
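For context, not from the log: the two Non-zero exit entries above assert that "config get" fails (exit status 14) whenever the key is unset, while it succeeds between the set and the final unset. A compact sketch of the same round trip against the functional-085003 profile:

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func mk(args ...string) *exec.Cmd {
	return exec.Command("out/minikube-linux-arm64",
		append([]string{"-p", "functional-085003", "config"}, args...)...)
}

func main() {
	if out, err := mk("set", "cpus", "2").CombinedOutput(); err != nil {
		log.Fatalf("config set failed: %v\n%s", err, out)
	}
	out, err := mk("get", "cpus").Output()
	if err != nil {
		log.Fatalf("config get failed right after set: %v", err)
	}
	fmt.Printf("cpus = %s", out)
	if out, err := mk("unset", "cpus").CombinedOutput(); err != nil {
		log.Fatalf("config unset failed: %v\n%s", err, out)
	}
	// Once the key is unset, get is expected to exit non-zero (status 14 above).
	if _, err := mk("get", "cpus").Output(); err == nil {
		log.Fatal("config get should fail on the unset key")
	}
	fmt.Println("config get fails on the unset key, as expected")
}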

                                                
                                    
TestFunctional/parallel/DryRun (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085003 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-085003 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (292.458571ms)

                                                
                                                
-- stdout --
	* [functional-085003] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:13:18.472661 1169603 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:13:18.472823 1169603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:13:18.472829 1169603 out.go:374] Setting ErrFile to fd 2...
	I0929 13:13:18.472835 1169603 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:13:18.473076 1169603 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:13:18.473465 1169603 out.go:368] Setting JSON to false
	I0929 13:13:18.474522 1169603 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17751,"bootTime":1759133848,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 13:13:18.474590 1169603 start.go:140] virtualization:  
	I0929 13:13:18.478187 1169603 out.go:179] * [functional-085003] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	I0929 13:13:18.484683 1169603 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:13:18.484924 1169603 notify.go:220] Checking for updates...
	I0929 13:13:18.491010 1169603 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:13:18.496811 1169603 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:13:18.504721 1169603 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 13:13:18.507573 1169603 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 13:13:18.510496 1169603 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:13:18.515774 1169603 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:13:18.516381 1169603 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:13:18.561736 1169603 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 13:13:18.561860 1169603 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:13:18.674817 1169603 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-29 13:13:18.664569157 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:13:18.674919 1169603 docker.go:318] overlay module found
	I0929 13:13:18.678413 1169603 out.go:179] * Using the docker driver based on existing profile
	I0929 13:13:18.681241 1169603 start.go:304] selected driver: docker
	I0929 13:13:18.681268 1169603 start.go:924] validating driver "docker" against &{Name:functional-085003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-085003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:13:18.681356 1169603 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:13:18.684794 1169603 out.go:203] 
	W0929 13:13:18.689091 1169603 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0929 13:13:18.691975 1169603 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085003 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.69s)
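Not in the log: exit status 23 above is minikube's validation guard (RSRC_INSUFFICIENT_REQ_MEMORY) rejecting the 250MB request before anything is created; with --dry-run nothing is started in either case. A minimal sketch that provokes the same rejection and reads the exit code:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "functional-085003",
		"--dry-run", "--memory", "250MB", "--driver=docker", "--container-runtime=docker")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("dry run rejected as expected (exit %d)\n%s", exitErr.ExitCode(), out)
		return
	}
	fmt.Printf("dry run did not fail (err=%v):\n%s", err, out)
}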

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-arm64 start -p functional-085003 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-085003 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (260.478534ms)

                                                
                                                
-- stdout --
	* [functional-085003] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:13:20.315190 1170191 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:13:20.315402 1170191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:13:20.315415 1170191 out.go:374] Setting ErrFile to fd 2...
	I0929 13:13:20.315421 1170191 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:13:20.315795 1170191 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:13:20.316202 1170191 out.go:368] Setting JSON to false
	I0929 13:13:20.317277 1170191 start.go:130] hostinfo: {"hostname":"ip-172-31-30-239","uptime":17753,"bootTime":1759133848,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1084-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"92f46a7d-c249-4c12-924a-77f64874c910"}
	I0929 13:13:20.317357 1170191 start.go:140] virtualization:  
	I0929 13:13:20.320787 1170191 out.go:179] * [functional-085003] minikube v1.37.0 sur Ubuntu 20.04 (arm64)
	I0929 13:13:20.323722 1170191 notify.go:220] Checking for updates...
	I0929 13:13:20.324274 1170191 out.go:179]   - MINIKUBE_LOCATION=21652
	I0929 13:13:20.327568 1170191 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0929 13:13:20.330545 1170191 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	I0929 13:13:20.338424 1170191 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	I0929 13:13:20.342498 1170191 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I0929 13:13:20.345440 1170191 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0929 13:13:20.349413 1170191 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:13:20.350013 1170191 driver.go:421] Setting default libvirt URI to qemu:///system
	I0929 13:13:20.396652 1170191 docker.go:123] docker version: linux-28.1.1:Docker Engine - Community
	I0929 13:13:20.396771 1170191 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:13:20.473351 1170191 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:53 SystemTime:2025-09-29 13:13:20.463525815 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:13:20.473594 1170191 docker.go:318] overlay module found
	I0929 13:13:20.476738 1170191 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0929 13:13:20.479675 1170191 start.go:304] selected driver: docker
	I0929 13:13:20.479707 1170191 start.go:924] validating driver "docker" against &{Name:functional-085003 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-085003 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p
MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0929 13:13:20.479799 1170191 start.go:935] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0929 13:13:20.483411 1170191 out.go:203] 
	W0929 13:13:20.486380 1170191 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0929 13:13:20.489356 1170191 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.26s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.14s)
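
StatusCmd exercises the three output modes of minikube status; -f takes a Go template over the status struct (the test's format string labels the field "kublet", but the template key it reads is .Kubelet). A minimal sketch of the same calls, assuming a stock minikube binary in place of the locally built out/minikube-linux-arm64:

  minikube -p functional-085003 status
  minikube -p functional-085003 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
  minikube -p functional-085003 status -o json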

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (9.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-085003 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-085003 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-tk885" [5075be01-38fd-462f-b511-2663647a3d8d] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-connect-7d85dfc575-tk885" [5075be01-38fd-462f-b511-2663647a3d8d] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 9.002989375s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.49.2:31852
functional_test.go:1680: http://192.168.49.2:31852: success! body:
Request served by hello-node-connect-7d85dfc575-tk885

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.49.2:31852
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (9.69s)
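
The ServiceCmdConnect steps can be reproduced by hand: create the deployment, expose it on a NodePort, wait for the pod, then curl the URL minikube resolves for the service. A sketch, assuming the same functional-085003 profile and a stock minikube binary:

  kubectl --context functional-085003 create deployment hello-node-connect --image kicbase/echo-server
  kubectl --context functional-085003 expose deployment hello-node-connect --type=NodePort --port=8080
  kubectl --context functional-085003 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
  URL=$(minikube -p functional-085003 service hello-node-connect --url)
  curl -s "$URL"    # the echo server reports the request it served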

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.35s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4c488c18-7dd4-4275-9c30-4f8cd602208d] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004171153s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-085003 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-085003 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-085003 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-085003 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [e9d7be01-d778-4810-9d78-f622f77f5e3b] Pending
helpers_test.go:352: "sp-pod" [e9d7be01-d778-4810-9d78-f622f77f5e3b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [e9d7be01-d778-4810-9d78-f622f77f5e3b] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.00341423s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-085003 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-085003 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-085003 delete -f testdata/storage-provisioner/pod.yaml: (1.274127694s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-085003 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [6ec0d2a2-e0f9-4864-93e9-f0463262c0d2] Pending
helpers_test.go:352: "sp-pod" [6ec0d2a2-e0f9-4864-93e9-f0463262c0d2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [6ec0d2a2-e0f9-4864-93e9-f0463262c0d2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.004612042s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-085003 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.36s)
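
The persistence check above amounts to: write a file through the claim, delete and recreate the pod, and confirm the file is still there. A rough equivalent, assuming the testdata/storage-provisioner manifests from the minikube repository are available locally:

  kubectl --context functional-085003 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-085003 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-085003 wait --for=condition=ready pod sp-pod --timeout=240s
  kubectl --context functional-085003 exec sp-pod -- touch /tmp/mount/foo
  # replace the pod; the PVC-backed volume (and the file) should survive
  kubectl --context functional-085003 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-085003 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-085003 wait --for=condition=ready pod sp-pod --timeout=240s
  kubectl --context functional-085003 exec sp-pod -- ls /tmp/mount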

                                                
                                    
TestFunctional/parallel/SSHCmd (0.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.81s)

                                                
                                    
TestFunctional/parallel/CpCmd (2.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh -n functional-085003 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cp functional-085003:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2604590134/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh -n functional-085003 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh -n functional-085003 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.11s)
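
The cp round trip exercised here is a handy pattern on its own: copy a file into the node, read it back over ssh, then pull it out to the host again. Mirroring the logged commands (the local output path is illustrative):

  minikube -p functional-085003 cp testdata/cp-test.txt /home/docker/cp-test.txt
  minikube -p functional-085003 ssh -n functional-085003 "sudo cat /home/docker/cp-test.txt"
  minikube -p functional-085003 cp functional-085003:/home/docker/cp-test.txt /tmp/cp-test-roundtrip.txt
  diff testdata/cp-test.txt /tmp/cp-test-roundtrip.txt    # no output means the round trip preserved the file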

                                                
                                    
TestFunctional/parallel/FileSync (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/1127640/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo cat /etc/test/nested/copy/1127640/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.40s)

                                                
                                    
TestFunctional/parallel/CertSync (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/1127640.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo cat /etc/ssl/certs/1127640.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/1127640.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo cat /usr/share/ca-certificates/1127640.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/11276402.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo cat /etc/ssl/certs/11276402.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/11276402.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo cat /usr/share/ca-certificates/11276402.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.09s)
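
FileSync and CertSync both verify that content staged on the host (files under the profile's .minikube/files tree, certificates under .minikube/certs) ends up inside the node. The hash-named paths below come from this run; a quick manual check looks like:

  # file sync target from the FileSync test
  minikube -p functional-085003 ssh "sudo cat /etc/test/nested/copy/1127640/hosts"
  # cert sync installs the test certs in both trust locations
  minikube -p functional-085003 ssh "sudo cat /etc/ssl/certs/1127640.pem"
  minikube -p functional-085003 ssh "sudo cat /usr/share/ca-certificates/1127640.pem"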

                                                
                                    
TestFunctional/parallel/NodeLabels (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-085003 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.10s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 ssh "sudo systemctl is-active crio": exit status 1 (358.999201ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.36s)
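
The non-zero exit above is the expected result: with docker selected as the container runtime, crio has to report inactive (the logged ssh exit status 3 is systemd's code for an inactive unit). Checking by hand:

  minikube -p functional-085003 ssh "sudo systemctl is-active crio"     # prints "inactive", exits non-zero
  minikube -p functional-085003 ssh "sudo systemctl is-active docker"   # prints "active" for this profile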

                                                
                                    
TestFunctional/parallel/License (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.39s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (1.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-linux-arm64 -p functional-085003 version -o=json --components: (1.053771644s)
--- PASS: TestFunctional/parallel/Version/components (1.05s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085003 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/minikube-local-cache-test:functional-085003
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-085003
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085003 image ls --format short --alsologtostderr:
I0929 13:13:32.268550 1172432 out.go:360] Setting OutFile to fd 1 ...
I0929 13:13:32.268782 1172432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:32.268882 1172432 out.go:374] Setting ErrFile to fd 2...
I0929 13:13:32.268904 1172432 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:32.269191 1172432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
I0929 13:13:32.269890 1172432 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:32.270099 1172432 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:32.270686 1172432 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:32.287892 1172432 ssh_runner.go:195] Run: systemctl --version
I0929 13:13:32.287950 1172432 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:32.305749 1172432 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
I0929 13:13:32.401390 1172432 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085003 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/kube-apiserver              │ v1.34.0           │ d291939e99406 │ 83.7MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.0           │ a25f5ef9c34c3 │ 50.5MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 138784d87c9c5 │ 72.1MB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ ba04bb24b9575 │ 29MB   │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc      │ 1611cd07b61d5 │ 3.55MB │
│ docker.io/library/nginx                     │ latest            │ 17848b7d08d19 │ 198MB  │
│ registry.k8s.io/etcd                        │ 3.6.4-0           │ a1894772a478e │ 205MB  │
│ registry.k8s.io/pause                       │ 3.3               │ 3d18732f8686c │ 484kB  │
│ localhost/my-image                          │ functional-085003 │ 00f8924ed7c50 │ 1.41MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.0           │ 6fc32d66c1411 │ 74.7MB │
│ docker.io/kicbase/echo-server               │ functional-085003 │ ce2d2cda2d858 │ 4.78MB │
│ docker.io/kicbase/echo-server               │ latest            │ ce2d2cda2d858 │ 4.78MB │
│ registry.k8s.io/pause                       │ 3.1               │ 8057e0500773a │ 525kB  │
│ docker.io/library/minikube-local-cache-test │ functional-085003 │ d55a659a957cc │ 30B    │
│ registry.k8s.io/kube-controller-manager     │ v1.34.0           │ 996be7e86d9b3 │ 71.5MB │
│ docker.io/library/nginx                     │ alpine            │ 35f3cbee4fb77 │ 52.9MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ d7b100cd9a77b │ 514kB  │
│ registry.k8s.io/pause                       │ latest            │ 8cb2091f603e7 │ 240kB  │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085003 image ls --format table --alsologtostderr:
I0929 13:13:36.516477 1172761 out.go:360] Setting OutFile to fd 1 ...
I0929 13:13:36.516717 1172761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:36.516751 1172761 out.go:374] Setting ErrFile to fd 2...
I0929 13:13:36.516771 1172761 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:36.517045 1172761 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
I0929 13:13:36.517714 1172761 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:36.517865 1172761 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:36.518343 1172761 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:36.536676 1172761 ssh_runner.go:195] Run: systemctl --version
I0929 13:13:36.536734 1172761 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:36.553090 1172761 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
I0929 13:13:36.649389 1172761 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
E0929 13:15:38.296787 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:16:06.006125 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085003 image ls --format json --alsologtostderr:
[{"id":"d55a659a957cc32e7732260c550c9fb7ec515db0ca10cda0ac5d80904b92a834","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-085003"],"size":"30"},{"id":"17848b7d08d196d4e7b420f48ba286132a07937574561d4a6c085651f5177f97","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"198000000"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-085003","docker.io/kicbase/echo-server:latest"],"size":"4780000"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"484000"},{"id":"00f8924ed7c5038b98375a31e578ca8112714b2393ca6cc6464a539cb4f9faad","repoDigests":[],"repoTags":["localhost/my-image:functional-085003"],"size":"1410000"},{"id":"d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"83700000"
},{"id":"a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.0"],"size":"50500000"},{"id":"6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"74700000"},{"id":"996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"71500000"},{"id":"138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"72100000"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"525000"},{"id":"35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"52900000"},{"id":"a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a59870831
64bd00bc0e","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"205000000"},{"id":"d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"514000"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29000000"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3550000"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085003 image ls --format json --alsologtostderr:
I0929 13:13:36.296764 1172730 out.go:360] Setting OutFile to fd 1 ...
I0929 13:13:36.296937 1172730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:36.296968 1172730 out.go:374] Setting ErrFile to fd 2...
I0929 13:13:36.296989 1172730 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:36.297265 1172730 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
I0929 13:13:36.297897 1172730 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:36.298061 1172730 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:36.298547 1172730 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:36.316244 1172730 ssh_runner.go:195] Run: systemctl --version
I0929 13:13:36.316301 1172730 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:36.342698 1172730 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
I0929 13:13:36.441215 1172730 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-arm64 -p functional-085003 image ls --format yaml --alsologtostderr:
- id: d291939e994064911484215449d0ab96c535b02adc4fc5d0ad4e438cf71465be
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "83700000"
- id: 6fc32d66c141152245438e6512df788cb52d64a1617e33561950b0e7a4675abf
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "74700000"
- id: 138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "72100000"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-085003
- docker.io/kicbase/echo-server:latest
size: "4780000"
- id: d55a659a957cc32e7732260c550c9fb7ec515db0ca10cda0ac5d80904b92a834
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-085003
size: "30"
- id: a25f5ef9c34c37c649f3b4f9631a169221ac2d6f41d9767c7588cd355f76f9ee
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "50500000"
- id: a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "205000000"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29000000"
- id: 35f3cbee4fb77c3efb39f2723a21ce181906139442a37de8ffc52d89641d9936
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "52900000"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3550000"
- id: 996be7e86d9b3a549d718de63713d9fea9db1f45ac44863a6770292d7b463570
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "71500000"
- id: 17848b7d08d196d4e7b420f48ba286132a07937574561d4a6c085651f5177f97
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "198000000"
- id: d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "514000"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "484000"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "525000"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085003 image ls --format yaml --alsologtostderr:
I0929 13:13:32.501310 1172464 out.go:360] Setting OutFile to fd 1 ...
I0929 13:13:32.501557 1172464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:32.501737 1172464 out.go:374] Setting ErrFile to fd 2...
I0929 13:13:32.501785 1172464 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:32.502194 1172464 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
I0929 13:13:32.503091 1172464 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:32.503350 1172464 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:32.503969 1172464 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:32.523737 1172464 ssh_runner.go:195] Run: systemctl --version
I0929 13:13:32.523818 1172464 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:32.543281 1172464 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
I0929 13:13:32.642017 1172464 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
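
The four ImageList variants differ only in the --format flag; each one, per the stderr above, shells into the node and runs docker images --no-trunc under the hood. For reference, the same listing in every format seen in this run:

  minikube -p functional-085003 image ls --format short   # one image reference per line
  minikube -p functional-085003 image ls --format table   # box-drawn IMAGE / TAG / IMAGE ID / SIZE table
  minikube -p functional-085003 image ls --format json
  minikube -p functional-085003 image ls --format yaml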

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 ssh pgrep buildkitd: exit status 1 (294.413982ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image build -t localhost/my-image:functional-085003 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-arm64 -p functional-085003 image build -t localhost/my-image:functional-085003 testdata/build --alsologtostderr: (3.056772985s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-arm64 -p functional-085003 image build -t localhost/my-image:functional-085003 testdata/build --alsologtostderr:
I0929 13:13:33.022494 1172553 out.go:360] Setting OutFile to fd 1 ...
I0929 13:13:33.023252 1172553 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:33.023273 1172553 out.go:374] Setting ErrFile to fd 2...
I0929 13:13:33.023280 1172553 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0929 13:13:33.023579 1172553 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
I0929 13:13:33.024241 1172553 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:33.026244 1172553 config.go:182] Loaded profile config "functional-085003": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
I0929 13:13:33.026786 1172553 cli_runner.go:164] Run: docker container inspect functional-085003 --format={{.State.Status}}
I0929 13:13:33.044862 1172553 ssh_runner.go:195] Run: systemctl --version
I0929 13:13:33.044929 1172553 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-085003
I0929 13:13:33.061866 1172553 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33933 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/functional-085003/id_rsa Username:docker}
I0929 13:13:33.161290 1172553 build_images.go:161] Building image from path: /tmp/build.2686402990.tar
I0929 13:13:33.161383 1172553 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0929 13:13:33.170781 1172553 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2686402990.tar
I0929 13:13:33.174880 1172553 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2686402990.tar: stat -c "%s %y" /var/lib/minikube/build/build.2686402990.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2686402990.tar': No such file or directory
I0929 13:13:33.174908 1172553 ssh_runner.go:362] scp /tmp/build.2686402990.tar --> /var/lib/minikube/build/build.2686402990.tar (3072 bytes)
I0929 13:13:33.201857 1172553 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2686402990
I0929 13:13:33.210958 1172553 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2686402990 -xf /var/lib/minikube/build/build.2686402990.tar
I0929 13:13:33.220965 1172553 docker.go:361] Building image: /var/lib/minikube/build/build.2686402990
I0929 13:13:33.221046 1172553 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-085003 /var/lib/minikube/build/build.2686402990
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.5s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0B / 828.50kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9 527B / 527B done
#5 sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02 1.47kB / 1.47kB done
#5 sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 828.50kB / 828.50kB 0.4s done
#5 extracting sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34 0.0s done
#5 DONE 0.6s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.0s done
#8 writing image sha256:00f8924ed7c5038b98375a31e578ca8112714b2393ca6cc6464a539cb4f9faad done
#8 naming to localhost/my-image:functional-085003 done
#8 DONE 0.1s
I0929 13:13:35.989304 1172553 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-085003 /var/lib/minikube/build/build.2686402990: (2.768230574s)
I0929 13:13:35.989369 1172553 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2686402990
I0929 13:13:35.999706 1172553 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2686402990.tar
I0929 13:13:36.016223 1172553 build_images.go:217] Built localhost/my-image:functional-085003 from /tmp/build.2686402990.tar
I0929 13:13:36.016256 1172553 build_images.go:133] succeeded building to: functional-085003
I0929 13:13:36.016262 1172553 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.57s)
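
The build path in this test is worth noting: the local testdata/build context is tarred, copied into the node, and built there with the node's docker daemon, so no builder is needed on the host. A condensed sketch of the same flow (the Dockerfile is whatever lives in testdata/build):

  minikube -p functional-085003 image build -t localhost/my-image:functional-085003 testdata/build
  minikube -p functional-085003 image ls | grep my-image    # the new tag should now be in the node's store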

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-085003
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image load --daemon kicbase/echo-server:functional-085003 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.29s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (1.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:514: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-085003 docker-env) && out/minikube-linux-arm64 status -p functional-085003"
functional_test.go:537: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-arm64 -p functional-085003 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (1.52s)
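
docker-env points the host's docker CLI at the daemon inside the node, which is exactly what the test verifies by running docker images through it. A minimal sketch for a bash shell:

  # affects the current shell only; later docker commands talk to the cluster's daemon
  eval $(minikube -p functional-085003 docker-env)
  docker images
  # revert to the host daemon when done
  eval $(minikube -p functional-085003 docker-env --unset)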

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image load --daemon kicbase/echo-server:functional-085003 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.10s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)
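
All three UpdateContextCmd variants run the same command; it reconciles the kubeconfig entry for the profile with the cluster's current IP and port, and is a no-op when nothing has changed. For reference:

  minikube -p functional-085003 update-context    # rewrites the kubeconfig entry only if the endpoint moved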

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-085003
functional_test.go:260: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image load --daemon kicbase/echo-server:functional-085003 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.19s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image save kicbase/echo-server:functional-085003 /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image rm kicbase/echo-server:functional-085003 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.54s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image load /home/jenkins/workspace/Docker_Linux_docker_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1330: Took "429.165499ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1344: Took "94.741282ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.52s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-085003
functional_test.go:439: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 image save --daemon kicbase/echo-server:functional-085003 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-085003
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.50s)
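
Taken together, the ImageSave*/ImageRemove/ImageLoad* steps form a full round trip between the node's image store, a tarball on the host, and the host's docker daemon. Condensed, with an illustrative tar path in place of the workspace path used by CI:

  minikube -p functional-085003 image save kicbase/echo-server:functional-085003 /tmp/echo-server-save.tar
  minikube -p functional-085003 image rm kicbase/echo-server:functional-085003
  minikube -p functional-085003 image load /tmp/echo-server-save.tar
  # copy the image from the node back into the host daemon and confirm it arrived
  minikube -p functional-085003 image save --daemon kicbase/echo-server:functional-085003
  docker image inspect kicbase/echo-server:functional-085003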

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1381: Took "482.0506ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1394: Took "70.114489ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.55s)
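
The profile listings being timed here differ mainly in whether the cluster status is probed: --light skips that check, which is why it returns in roughly 70ms against roughly 480ms for the full JSON listing. For reference:

  minikube profile list                    # table output, includes live status
  minikube profile list -o json            # full machine-readable listing
  minikube profile list -o json --light    # skips the status check, returns faster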

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-085003 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-085003 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-085003 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-085003 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 1167916: os: process already finished
helpers_test.go:525: unable to kill pid 1167714: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.79s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-085003 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-085003 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [59e2d5eb-58b9-46cd-8949-ddecc0a66295] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [59e2d5eb-58b9-46cd-8949-ddecc0a66295] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.003623048s
I0929 13:12:59.677983 1127640 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.34s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-085003 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.100.246.229 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
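
For reference, the tunnel sequence above (StartTunnel, WaitService, IngressIP, AccessDirect) can be reproduced by hand with the same commands. A minimal sketch, assuming the profile functional-085003 and testdata/testsvc.yaml from this run; the kubectl wait and curl steps are stand-ins for the test's own polling and HTTP check:

  # Run the tunnel in the background so LoadBalancer services get an ingress IP assigned.
  out/minikube-linux-arm64 -p functional-085003 tunnel --alsologtostderr &

  # Create the nginx-svc LoadBalancer service and wait for its pod (label run=nginx-svc).
  kubectl --context functional-085003 apply -f testdata/testsvc.yaml
  kubectl --context functional-085003 wait --for=condition=Ready pod -l run=nginx-svc --timeout=4m

  # Read the ingress IP the tunnel assigned and hit the service directly.
  IP=$(kubectl --context functional-085003 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
  curl "http://${IP}"    # this run resolved to http://10.100.246.229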

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-085003 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-085003 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-085003 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-x9877" [02558f52-62b8-45c2-8811-ee21e898e01b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:352: "hello-node-75c85bcc94-x9877" [02558f52-62b8-45c2-8811-ee21e898e01b] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.00418007s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.27s)
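
DeployApp comes down to two kubectl commands plus a readiness wait. A rough equivalent, reusing the names from this run (profile functional-085003, deployment hello-node); the kubectl wait line is a stand-in for the test's own pod polling:

  # Deploy the echo server and expose it as a NodePort service on port 8080.
  kubectl --context functional-085003 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-085003 expose deployment hello-node --type=NodePort --port=8080

  # Wait until the pod labelled app=hello-node reports Ready (the test allows up to 10m).
  kubectl --context functional-085003 wait --for=condition=Ready pod -l app=hello-node --timeout=10m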

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 service list -o json
functional_test.go:1504: Took "506.2518ms" to run "out/minikube-linux-arm64 -p functional-085003 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.49.2:30881
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30881
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.39s)
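
The List, JSONOutput, HTTPS, Format and URL checks above all resolve the same NodePort endpoint for hello-node; the commands, exactly as the suite invokes them, are:

  # List exposed services for the profile (add -o json for machine-readable output).
  out/minikube-linux-arm64 -p functional-085003 service list
  out/minikube-linux-arm64 -p functional-085003 service list -o json

  # Resolve the endpoint three ways: HTTPS URL, bare node IP, plain HTTP URL.
  out/minikube-linux-arm64 -p functional-085003 service --namespace=default --https --url hello-node
  out/minikube-linux-arm64 -p functional-085003 service hello-node --url --format={{.IP}}
  out/minikube-linux-arm64 -p functional-085003 service hello-node --url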

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (8.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdany-port1728533961/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1759151598605691327" to /tmp/TestFunctionalparallelMountCmdany-port1728533961/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1759151598605691327" to /tmp/TestFunctionalparallelMountCmdany-port1728533961/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1759151598605691327" to /tmp/TestFunctionalparallelMountCmdany-port1728533961/001/test-1759151598605691327
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (442.853486ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 13:13:19.049587 1127640 retry.go:31] will retry after 650.429625ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 29 13:13 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 29 13:13 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 29 13:13 test-1759151598605691327
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh cat /mount-9p/test-1759151598605691327
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-085003 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [aaf773b1-aeaa-4c31-b58c-10034514c2ca] Pending
helpers_test.go:352: "busybox-mount" [aaf773b1-aeaa-4c31-b58c-10034514c2ca] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0929 13:13:22.159218 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox-mount" [aaf773b1-aeaa-4c31-b58c-10034514c2ca] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [aaf773b1-aeaa-4c31-b58c-10034514c2ca] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 5.004041976s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-085003 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdany-port1728533961/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (8.50s)
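
The any-port test above is a 9p mount plus a few guest-side checks. A hand-run sketch, with /tmp/my-mount-dir standing in for the test's temp directory (the backgrounding and the final kill are likewise just one way to manage the mount process):

  # Mount a host directory into the guest at /mount-9p over 9p; the command stays in the foreground, so background it.
  out/minikube-linux-arm64 mount -p functional-085003 /tmp/my-mount-dir:/mount-9p --alsologtostderr -v=1 &

  # Verify the 9p mount is active inside the guest and inspect its contents.
  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-arm64 -p functional-085003 ssh -- ls -la /mount-9p

  # Tear down: force-unmount in the guest, then stop the backgrounded mount process.
  out/minikube-linux-arm64 -p functional-085003 ssh "sudo umount -f /mount-9p"
  kill %1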

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdspecific-port2215760306/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (363.448616ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 13:13:27.463681 1127640 retry.go:31] will retry after 405.424375ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdspecific-port2215760306/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 ssh "sudo umount -f /mount-9p": exit status 1 (271.230576ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-085003 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdspecific-port2215760306/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.80s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T" /mount1: exit status 1 (554.748474ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0929 13:13:29.454596 1127640 retry.go:31] will retry after 302.534577ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-085003 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-085003 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2748648500/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)
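
VerifyCleanup exercises the --kill flag, which tears down every mount process belonging to the profile at once. A sketch with the same guest mount points (/tmp/my-mount-dir is again a placeholder for the test's temp directory):

  # Start several mounts of one host directory at different guest paths.
  out/minikube-linux-arm64 mount -p functional-085003 /tmp/my-mount-dir:/mount1 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-085003 /tmp/my-mount-dir:/mount2 --alsologtostderr -v=1 &
  out/minikube-linux-arm64 mount -p functional-085003 /tmp/my-mount-dir:/mount3 --alsologtostderr -v=1 &

  # Confirm each mount point is visible in the guest.
  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T" /mount1
  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T" /mount2
  out/minikube-linux-arm64 -p functional-085003 ssh "findmnt -T" /mount3

  # Kill all mount processes for the profile in one go.
  out/minikube-linux-arm64 mount -p functional-085003 --kill=true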

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-085003
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-085003
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-085003
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (147.58s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0929 13:20:38.296195 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (2m26.697140613s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (147.58s)
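
StartCluster is driven by a single start invocation with --ha; the same cluster can be brought up manually with the command the test uses (the profile name ha-399583 is specific to this run):

  # Bring up a highly-available cluster (multiple control-plane nodes) on the docker driver.
  out/minikube-linux-arm64 -p ha-399583 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=docker

  # Report per-node host/kubelet/apiserver status for the whole cluster.
  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5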

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (18.68s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 node add --alsologtostderr -v 5: (16.86610573s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5: (1.815268806s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (18.68s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.16s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-399583 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.16s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.73s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.730892732s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.73s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (21.04s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 status --output json --alsologtostderr -v 5: (1.367272986s)
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp testdata/cp-test.txt ha-399583:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2188976079/001/cp-test_ha-399583.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583:/home/docker/cp-test.txt ha-399583-m02:/home/docker/cp-test_ha-399583_ha-399583-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test_ha-399583_ha-399583-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583:/home/docker/cp-test.txt ha-399583-m03:/home/docker/cp-test_ha-399583_ha-399583-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m03 "sudo cat /home/docker/cp-test_ha-399583_ha-399583-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583:/home/docker/cp-test.txt ha-399583-m04:/home/docker/cp-test_ha-399583_ha-399583-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m04 "sudo cat /home/docker/cp-test_ha-399583_ha-399583-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp testdata/cp-test.txt ha-399583-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2188976079/001/cp-test_ha-399583-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m02:/home/docker/cp-test.txt ha-399583:/home/docker/cp-test_ha-399583-m02_ha-399583.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583 "sudo cat /home/docker/cp-test_ha-399583-m02_ha-399583.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m02:/home/docker/cp-test.txt ha-399583-m03:/home/docker/cp-test_ha-399583-m02_ha-399583-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m03 "sudo cat /home/docker/cp-test_ha-399583-m02_ha-399583-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m02:/home/docker/cp-test.txt ha-399583-m04:/home/docker/cp-test_ha-399583-m02_ha-399583-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m04 "sudo cat /home/docker/cp-test_ha-399583-m02_ha-399583-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp testdata/cp-test.txt ha-399583-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2188976079/001/cp-test_ha-399583-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m03:/home/docker/cp-test.txt ha-399583:/home/docker/cp-test_ha-399583-m03_ha-399583.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583 "sudo cat /home/docker/cp-test_ha-399583-m03_ha-399583.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m03:/home/docker/cp-test.txt ha-399583-m02:/home/docker/cp-test_ha-399583-m03_ha-399583-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test_ha-399583-m03_ha-399583-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m03:/home/docker/cp-test.txt ha-399583-m04:/home/docker/cp-test_ha-399583-m03_ha-399583-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m04 "sudo cat /home/docker/cp-test_ha-399583-m03_ha-399583-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp testdata/cp-test.txt ha-399583-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2188976079/001/cp-test_ha-399583-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m04:/home/docker/cp-test.txt ha-399583:/home/docker/cp-test_ha-399583-m04_ha-399583.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583 "sudo cat /home/docker/cp-test_ha-399583-m04_ha-399583.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m04:/home/docker/cp-test.txt ha-399583-m02:/home/docker/cp-test_ha-399583-m04_ha-399583-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test_ha-399583-m04_ha-399583-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 cp ha-399583-m04:/home/docker/cp-test.txt ha-399583-m03:/home/docker/cp-test_ha-399583-m04_ha-399583-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m03 "sudo cat /home/docker/cp-test_ha-399583-m04_ha-399583-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (21.04s)
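
CopyFile runs minikube cp in three directions (host to node, node to host, node to node) and verifies each copy over ssh. One round of that pattern, with the host-side destination path shortened for illustration:

  # host -> node
  out/minikube-linux-arm64 -p ha-399583 cp testdata/cp-test.txt ha-399583:/home/docker/cp-test.txt
  # node -> host
  out/minikube-linux-arm64 -p ha-399583 cp ha-399583:/home/docker/cp-test.txt /tmp/cp-test_ha-399583.txt
  # node -> node
  out/minikube-linux-arm64 -p ha-399583 cp ha-399583:/home/docker/cp-test.txt ha-399583-m02:/home/docker/cp-test_ha-399583_ha-399583-m02.txt

  # Verify the copy by reading it back over ssh on the target node.
  out/minikube-linux-arm64 -p ha-399583 ssh -n ha-399583-m02 "sudo cat /home/docker/cp-test_ha-399583_ha-399583-m02.txt"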

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (11.98s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 node stop m02 --alsologtostderr -v 5: (11.058199316s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5: exit status 7 (924.399142ms)

                                                
                                                
-- stdout --
	ha-399583
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-399583-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-399583-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-399583-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:22:01.120240 1197356 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:22:01.120448 1197356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:22:01.120461 1197356 out.go:374] Setting ErrFile to fd 2...
	I0929 13:22:01.120466 1197356 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:22:01.120843 1197356 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:22:01.121068 1197356 out.go:368] Setting JSON to false
	I0929 13:22:01.121110 1197356 mustload.go:65] Loading cluster: ha-399583
	I0929 13:22:01.121219 1197356 notify.go:220] Checking for updates...
	I0929 13:22:01.121560 1197356 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:22:01.121584 1197356 status.go:174] checking status of ha-399583 ...
	I0929 13:22:01.122427 1197356 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:22:01.144766 1197356 status.go:371] ha-399583 host status = "Running" (err=<nil>)
	I0929 13:22:01.144794 1197356 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:22:01.145142 1197356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583
	I0929 13:22:01.170124 1197356 host.go:66] Checking if "ha-399583" exists ...
	I0929 13:22:01.170452 1197356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:22:01.170510 1197356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583
	I0929 13:22:01.197083 1197356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33938 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583/id_rsa Username:docker}
	I0929 13:22:01.298381 1197356 ssh_runner.go:195] Run: systemctl --version
	I0929 13:22:01.304067 1197356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:22:01.326359 1197356 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:22:01.450186 1197356 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:true NGoroutines:72 SystemTime:2025-09-29 13:22:01.430348948 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:22:01.450925 1197356 kubeconfig.go:125] found "ha-399583" server: "https://192.168.49.254:8443"
	I0929 13:22:01.450970 1197356 api_server.go:166] Checking apiserver status ...
	I0929 13:22:01.451015 1197356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:22:01.466679 1197356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2303/cgroup
	I0929 13:22:01.478765 1197356 api_server.go:182] apiserver freezer: "12:freezer:/docker/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/kubepods/burstable/pod9006d65b5341b1d5588f644c130b04e4/59b02d97e1876884a4f58cc4bf513b565abc54a84ff16fdb7be29887130fe60e"
	I0929 13:22:01.478851 1197356 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4ff0a10009db36f72e1cda963547db5481dd70edbba45987446b8160fb5656e0/kubepods/burstable/pod9006d65b5341b1d5588f644c130b04e4/59b02d97e1876884a4f58cc4bf513b565abc54a84ff16fdb7be29887130fe60e/freezer.state
	I0929 13:22:01.520129 1197356 api_server.go:204] freezer state: "THAWED"
	I0929 13:22:01.520166 1197356 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 13:22:01.530916 1197356 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 13:22:01.530944 1197356 status.go:463] ha-399583 apiserver status = Running (err=<nil>)
	I0929 13:22:01.530954 1197356 status.go:176] ha-399583 status: &{Name:ha-399583 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:22:01.530970 1197356 status.go:174] checking status of ha-399583-m02 ...
	I0929 13:22:01.531312 1197356 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Status}}
	I0929 13:22:01.569414 1197356 status.go:371] ha-399583-m02 host status = "Stopped" (err=<nil>)
	I0929 13:22:01.569437 1197356 status.go:384] host is not running, skipping remaining checks
	I0929 13:22:01.569443 1197356 status.go:176] ha-399583-m02 status: &{Name:ha-399583-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:22:01.569463 1197356 status.go:174] checking status of ha-399583-m03 ...
	I0929 13:22:01.569777 1197356 cli_runner.go:164] Run: docker container inspect ha-399583-m03 --format={{.State.Status}}
	I0929 13:22:01.609863 1197356 status.go:371] ha-399583-m03 host status = "Running" (err=<nil>)
	I0929 13:22:01.611319 1197356 host.go:66] Checking if "ha-399583-m03" exists ...
	I0929 13:22:01.611669 1197356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m03
	I0929 13:22:01.639723 1197356 host.go:66] Checking if "ha-399583-m03" exists ...
	I0929 13:22:01.640045 1197356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:22:01.640087 1197356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m03
	I0929 13:22:01.659411 1197356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33948 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m03/id_rsa Username:docker}
	I0929 13:22:01.757663 1197356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:22:01.778335 1197356 kubeconfig.go:125] found "ha-399583" server: "https://192.168.49.254:8443"
	I0929 13:22:01.778365 1197356 api_server.go:166] Checking apiserver status ...
	I0929 13:22:01.778409 1197356 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:22:01.791481 1197356 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2242/cgroup
	I0929 13:22:01.801360 1197356 api_server.go:182] apiserver freezer: "12:freezer:/docker/79340bc25d484e8796d6f7947e6143563d7de132f5fb24566205cf6c2438bb83/kubepods/burstable/podcfbe8d4d2bf552d3970388b2d2ee2891/94df0ba3945d06be18e13aa7f026b9113210bb67a15a2d10068ab7c2652ed071"
	I0929 13:22:01.801472 1197356 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/79340bc25d484e8796d6f7947e6143563d7de132f5fb24566205cf6c2438bb83/kubepods/burstable/podcfbe8d4d2bf552d3970388b2d2ee2891/94df0ba3945d06be18e13aa7f026b9113210bb67a15a2d10068ab7c2652ed071/freezer.state
	I0929 13:22:01.810596 1197356 api_server.go:204] freezer state: "THAWED"
	I0929 13:22:01.810671 1197356 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0929 13:22:01.819145 1197356 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0929 13:22:01.819173 1197356 status.go:463] ha-399583-m03 apiserver status = Running (err=<nil>)
	I0929 13:22:01.819183 1197356 status.go:176] ha-399583-m03 status: &{Name:ha-399583-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:22:01.819200 1197356 status.go:174] checking status of ha-399583-m04 ...
	I0929 13:22:01.819517 1197356 cli_runner.go:164] Run: docker container inspect ha-399583-m04 --format={{.State.Status}}
	I0929 13:22:01.837588 1197356 status.go:371] ha-399583-m04 host status = "Running" (err=<nil>)
	I0929 13:22:01.837614 1197356 host.go:66] Checking if "ha-399583-m04" exists ...
	I0929 13:22:01.837937 1197356 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-399583-m04
	I0929 13:22:01.854866 1197356 host.go:66] Checking if "ha-399583-m04" exists ...
	I0929 13:22:01.855180 1197356 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:22:01.855227 1197356 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-399583-m04
	I0929 13:22:01.873380 1197356 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33953 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/ha-399583-m04/id_rsa Username:docker}
	I0929 13:22:01.973773 1197356 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:22:01.985588 1197356 status.go:176] ha-399583-m04 status: &{Name:ha-399583-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (11.98s)
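
The stop/status pair above takes one control-plane node down and inspects the degraded cluster. Note that status returns a non-zero exit code (exit status 7 in this run) whenever a node is stopped, so a manual check should not treat that as a hard failure; the trailing echo below is only there to surface the code:

  # Stop the m02 control-plane node; the rest of the cluster keeps running.
  out/minikube-linux-arm64 -p ha-399583 node stop m02 --alsologtostderr -v 5

  # Status now reports m02 as Stopped and exits non-zero.
  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5 || echo "status exit code: $?"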

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.83s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (44.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 node start m02 --alsologtostderr -v 5: (42.488942136s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5: (1.675875103s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (44.33s)
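
Bringing the stopped node back is the mirror image; the commands the test runs are:

  # Restart the stopped control-plane node and wait for it to rejoin.
  out/minikube-linux-arm64 -p ha-399583 node start m02 --alsologtostderr -v 5

  # Confirm every node is Running again, from minikube and from the API server.
  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
  kubectl get nodes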

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.39s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.387383008s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.39s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (219.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 stop --alsologtostderr -v 5
E0929 13:22:50.246255 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:50.252603 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:50.263929 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:50.285272 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:50.326644 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:50.407970 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:50.569328 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:50.890686 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:51.532807 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:52.814157 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:22:55.376885 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:23:00.498905 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:23:10.741016 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 stop --alsologtostderr -v 5: (33.979938848s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 start --wait true --alsologtostderr -v 5
E0929 13:23:31.223028 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:24:12.184313 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:25:34.108674 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:25:38.296608 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 start --wait true --alsologtostderr -v 5: (3m4.971489119s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (219.14s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (11.33s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 node delete m03 --alsologtostderr -v 5: (10.386166268s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.33s)
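
DeleteSecondaryNode removes one control-plane node and checks that the survivors stay Ready. The go-template above simply prints each node's Ready condition; re-quoted for an interactive shell it looks like this:

  # Delete the m03 control-plane node from the cluster.
  out/minikube-linux-arm64 -p ha-399583 node delete m03 --alsologtostderr -v 5

  # Check the remaining nodes and their Ready condition.
  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
  kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'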

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.76s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (32.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 stop --alsologtostderr -v 5
E0929 13:27:01.367868 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 stop --alsologtostderr -v 5: (32.635545863s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5: exit status 7 (115.014336ms)

                                                
                                                
-- stdout --
	ha-399583
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-399583-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-399583-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:27:12.443942 1224913 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:27:12.444067 1224913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:27:12.444079 1224913 out.go:374] Setting ErrFile to fd 2...
	I0929 13:27:12.444083 1224913 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:27:12.444331 1224913 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:27:12.444571 1224913 out.go:368] Setting JSON to false
	I0929 13:27:12.444621 1224913 mustload.go:65] Loading cluster: ha-399583
	I0929 13:27:12.444697 1224913 notify.go:220] Checking for updates...
	I0929 13:27:12.445746 1224913 config.go:182] Loaded profile config "ha-399583": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:27:12.445775 1224913 status.go:174] checking status of ha-399583 ...
	I0929 13:27:12.446407 1224913 cli_runner.go:164] Run: docker container inspect ha-399583 --format={{.State.Status}}
	I0929 13:27:12.463805 1224913 status.go:371] ha-399583 host status = "Stopped" (err=<nil>)
	I0929 13:27:12.463828 1224913 status.go:384] host is not running, skipping remaining checks
	I0929 13:27:12.463835 1224913 status.go:176] ha-399583 status: &{Name:ha-399583 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:27:12.463865 1224913 status.go:174] checking status of ha-399583-m02 ...
	I0929 13:27:12.464170 1224913 cli_runner.go:164] Run: docker container inspect ha-399583-m02 --format={{.State.Status}}
	I0929 13:27:12.492258 1224913 status.go:371] ha-399583-m02 host status = "Stopped" (err=<nil>)
	I0929 13:27:12.492322 1224913 status.go:384] host is not running, skipping remaining checks
	I0929 13:27:12.492342 1224913 status.go:176] ha-399583-m02 status: &{Name:ha-399583-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:27:12.492374 1224913 status.go:174] checking status of ha-399583-m04 ...
	I0929 13:27:12.492746 1224913 cli_runner.go:164] Run: docker container inspect ha-399583-m04 --format={{.State.Status}}
	I0929 13:27:12.509948 1224913 status.go:371] ha-399583-m04 host status = "Stopped" (err=<nil>)
	I0929 13:27:12.509970 1224913 status.go:384] host is not running, skipping remaining checks
	I0929 13:27:12.509983 1224913 status.go:176] ha-399583-m04 status: &{Name:ha-399583-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (32.75s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (112.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker
E0929 13:27:50.246796 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:28:17.950195 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=docker: (1m51.645772514s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (112.63s)
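
StopCluster and RestartCluster above are the profile-level stop and start; the restart reuses the existing nodes rather than recreating them:

  # Stop every node in the profile.
  out/minikube-linux-arm64 -p ha-399583 stop --alsologtostderr -v 5

  # Start the same profile again and wait for all components to come back.
  out/minikube-linux-arm64 -p ha-399583 start --wait true --alsologtostderr -v 5 --driver=docker --container-runtime=docker
  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5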

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.78s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (45.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 node add --control-plane --alsologtostderr -v 5: (43.422106201s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-399583 status --alsologtostderr -v 5: (1.694959812s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.12s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.39s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.391675699s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.39s)

TestImageBuild/serial/Setup (32.71s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-linux-arm64 start -p image-672328 --driver=docker  --container-runtime=docker
image_test.go:69: (dbg) Done: out/minikube-linux-arm64 start -p image-672328 --driver=docker  --container-runtime=docker: (32.708557586s)
--- PASS: TestImageBuild/serial/Setup (32.71s)

TestImageBuild/serial/NormalBuild (1.97s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-672328
E0929 13:30:38.296633 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
image_test.go:78: (dbg) Done: out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-672328: (1.971503184s)
--- PASS: TestImageBuild/serial/NormalBuild (1.97s)

TestImageBuild/serial/BuildWithBuildArg (0.98s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-672328
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.98s)

TestImageBuild/serial/BuildWithDockerIgnore (0.87s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-672328
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.87s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-linux-arm64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-672328
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.85s)

TestJSONOutput/start/Command (68.53s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-629468 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-629468 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=docker: (1m8.521383676s)
--- PASS: TestJSONOutput/start/Command (68.53s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.6s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-629468 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.60s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.53s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-629468 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.53s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (10.93s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-629468 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-629468 --output=json --user=testUser: (10.928271212s)
--- PASS: TestJSONOutput/stop/Command (10.93s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.24s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-373960 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-373960 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (91.793014ms)

-- stdout --
	{"specversion":"1.0","id":"e3f8604d-24d8-4e36-9f5c-05bf5ca6ff1c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-373960] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"9876794f-47c9-40d9-9cbd-389fc67bfad2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21652"}}
	{"specversion":"1.0","id":"ac1265a8-87cb-47d3-9cb6-4008e8374f02","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"62ff2ab6-f34a-422a-9c64-be32e62c7391","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig"}}
	{"specversion":"1.0","id":"a57b7629-cb06-4731-8617-8bea2637aa2f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube"}}
	{"specversion":"1.0","id":"badf3fe4-a00c-4895-bd53-7f50622d0d15","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"b811b53f-5b8b-465f-95d7-2471c87a520e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8616a359-08d5-4da6-a7dc-1723c34fa6dd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
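The lines in the stdout block above are CloudEvents-style JSON records, one per line. A minimal sketch of how such a stream could be consumed (the struct mirrors only the keys visible in this output; it is not minikube's own type):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the keys seen in the --output=json lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// e.g. pipe `minikube start --output=json ...` into this program.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event: exitcode=%s message=%q\n", ev.Data["exitcode"], ev.Data["message"])
			return
		}
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}
```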
helpers_test.go:175: Cleaning up "json-output-error-373960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-373960
--- PASS: TestErrorJSONOutput (0.24s)

TestKicCustomNetwork/create_custom_network (34.6s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-674841 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-674841 --network=: (32.43239953s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-674841" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-674841
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-674841: (2.132509227s)
--- PASS: TestKicCustomNetwork/create_custom_network (34.60s)

TestKicCustomNetwork/use_default_bridge_network (32.9s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-808807 --network=bridge
E0929 13:32:50.245312 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-808807 --network=bridge: (30.884205676s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-808807" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-808807
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-808807: (1.987576472s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (32.90s)

TestKicExistingNetwork (35.93s)

=== RUN   TestKicExistingNetwork
I0929 13:33:14.281865 1127640 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0929 13:33:14.297564 1127640 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0929 13:33:14.297643 1127640 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0929 13:33:14.297660 1127640 cli_runner.go:164] Run: docker network inspect existing-network
W0929 13:33:14.313071 1127640 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0929 13:33:14.313103 1127640 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I0929 13:33:14.313117 1127640 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I0929 13:33:14.313220 1127640 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0929 13:33:14.328885 1127640 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-85cc826cc833 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:e6:9d:b6:86:22:ad} reservation:<nil>}
I0929 13:33:14.329181 1127640 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001bfbc70}
I0929 13:33:14.329201 1127640 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0929 13:33:14.329250 1127640 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0929 13:33:14.388135 1127640 network_create.go:108] docker network existing-network 192.168.58.0/24 created
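The sequence above skips 192.168.49.0/24 because it is already taken, picks 192.168.58.0/24, and then creates the network with an explicit gateway and MTU. A rough standalone sketch of that flow follows; the candidate subnet list and the shell-based probing are illustrative assumptions, not minikube's actual network_create implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Collect subnets already claimed by existing docker networks.
	out, err := exec.Command("sh", "-c",
		`docker network ls -q | xargs -r docker network inspect --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}'`).Output()
	if err != nil {
		fmt.Println("listing networks failed:", err)
		return
	}
	used := string(out)

	// Candidate /24s in the order the log above suggests (49 was taken, 58 was chosen).
	for _, third := range []int{49, 58, 67, 76, 85} {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		if strings.Contains(used, subnet) {
			fmt.Println("skipping taken subnet", subnet)
			continue
		}
		gateway := fmt.Sprintf("192.168.%d.1", third)
		// Same flags as the `docker network create` call in the log above.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway,
			"-o", "--ip-masq", "-o", "--icc", "-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		if b, err := cmd.CombinedOutput(); err != nil {
			fmt.Printf("create failed: %v\n%s", err, b)
			return
		}
		fmt.Println("created existing-network on", subnet)
		return
	}
}
```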
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-160160 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-160160 --network=existing-network: (33.812133762s)
helpers_test.go:175: Cleaning up "existing-network-160160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-160160
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-160160: (1.976624024s)
I0929 13:33:50.195176 1127640 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (35.93s)

TestKicCustomSubnet (34.32s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-120829 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-120829 --subnet=192.168.60.0/24: (32.143459399s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-120829 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-120829" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-120829
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-120829: (2.15224754s)
--- PASS: TestKicCustomSubnet (34.32s)

TestKicStaticIP (33.58s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-101328 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-101328 --static-ip=192.168.200.200: (31.309003118s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-101328 ip
helpers_test.go:175: Cleaning up "static-ip-101328" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-101328
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-101328: (2.114091829s)
--- PASS: TestKicStaticIP (33.58s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (74.19s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-283171 --driver=docker  --container-runtime=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-283171 --driver=docker  --container-runtime=docker: (34.701179577s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-286314 --driver=docker  --container-runtime=docker
E0929 13:35:38.296358 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-286314 --driver=docker  --container-runtime=docker: (33.823197803s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-283171
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-286314
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
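A minimal, schema-agnostic sketch of the same check: run `minikube profile list -o json` and confirm both profile names appear in the output. The bare `minikube` binary name is an assumption; the test invokes its own freshly built `out/minikube-linux-arm64`.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Assumes `minikube` is on PATH; no assumptions are made about the JSON schema.
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	for _, name := range []string{"first-283171", "second-286314"} {
		if !bytes.Contains(out, []byte(name)) {
			fmt.Println("profile missing from list:", name)
			return
		}
	}
	fmt.Println("both profiles present")
}
```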
helpers_test.go:175: Cleaning up "second-286314" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-286314
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-286314: (2.136029068s)
helpers_test.go:175: Cleaning up "first-283171" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-283171
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-283171: (2.128829692s)
--- PASS: TestMinikubeProfile (74.19s)

TestMountStart/serial/StartWithMountFirst (9.12s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-219200 --memory=3072 --mount-string /tmp/TestMountStartserial656696538/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-219200 --memory=3072 --mount-string /tmp/TestMountStartserial656696538/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (8.120939849s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.12s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-219200 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (8.29s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-221489 --memory=3072 --mount-string /tmp/TestMountStartserial656696538/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker
mount_start_test.go:118: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-221489 --memory=3072 --mount-string /tmp/TestMountStartserial656696538/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=docker: (7.292123819s)
--- PASS: TestMountStart/serial/StartWithMountSecond (8.29s)

TestMountStart/serial/VerifyMountSecond (0.28s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221489 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.28s)

TestMountStart/serial/DeleteFirst (1.48s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-219200 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-219200 --alsologtostderr -v=5: (1.475694771s)
--- PASS: TestMountStart/serial/DeleteFirst (1.48s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221489 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.21s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-221489
mount_start_test.go:196: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-221489: (1.212475207s)
--- PASS: TestMountStart/serial/Stop (1.21s)

TestMountStart/serial/RestartStopped (8.31s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-221489
mount_start_test.go:207: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-221489: (7.309853971s)
--- PASS: TestMountStart/serial/RestartStopped (8.31s)

TestMountStart/serial/VerifyMountPostStop (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-221489 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

TestMultiNode/serial/FreshStart2Nodes (70.31s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792020 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
E0929 13:37:50.246195 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-792020 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (1m9.794364721s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (70.31s)

TestMultiNode/serial/DeployApp2Nodes (39.08s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-792020 -- rollout status deployment/busybox: (4.487628364s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 13:37:58.651739 1127640 retry.go:31] will retry after 949.973582ms: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 13:37:59.744009 1127640 retry.go:31] will retry after 1.081212416s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 13:38:00.978057 1127640 retry.go:31] will retry after 3.010481075s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 13:38:04.141798 1127640 retry.go:31] will retry after 1.951202092s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 13:38:06.255884 1127640 retry.go:31] will retry after 5.440617643s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 13:38:11.843946 1127640 retry.go:31] will retry after 7.556465798s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:514: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
I0929 13:38:19.543192 1127640 retry.go:31] will retry after 11.593350035s: expected 2 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.3'\n\n-- /stdout --"
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].status.podIP}'
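The retry sequence above polls the pod IPs with progressively longer waits until two addresses appear. A rough sketch of that pattern, with an illustrative kubectl context name and delay schedule (the test actually drives its own minikube-wrapped kubectl and uses randomized retry intervals):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; attempt <= 10; attempt++ {
		out, err := exec.Command("kubectl", "--context", "multinode-792020",
			"get", "pods", "-o", "jsonpath={.items[*].status.podIP}").Output()
		if err == nil {
			ips := strings.Fields(strings.TrimSpace(string(out)))
			if len(ips) == 2 {
				fmt.Println("both pod IPs assigned:", ips)
				return
			}
			fmt.Printf("attempt %d: expected 2 pod IPs, got %d; retrying in %v\n", attempt, len(ips), delay)
		} else {
			fmt.Printf("attempt %d: kubectl failed (%v); retrying in %v\n", attempt, err, delay)
		}
		time.Sleep(delay)
		delay += delay / 2 // grow the wait roughly like the retry intervals logged above
	}
	fmt.Println("gave up waiting for 2 pod IPs")
}
```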
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bd9kt -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bgm5b -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bd9kt -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bgm5b -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bd9kt -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bgm5b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (39.08s)

TestMultiNode/serial/PingHostFrom2Pods (1.01s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bd9kt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bd9kt -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bgm5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-792020 -- exec busybox-7b57f96db7-bgm5b -- sh -c "ping -c 1 192.168.67.1"
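The shell pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) simply takes the third space-separated field of the fifth output line, which is the host gateway address that is then pinged. The same parsing in Go, as a rough sketch; the sample nslookup output is an assumed, plausible shape rather than a capture from this run:

```go
package main

import (
	"fmt"
	"strings"
)

// hostIPFromNslookup mimics `awk 'NR==5' | cut -d' ' -f3`: take line 5,
// split on single spaces (cut counts empty fields, unlike strings.Fields),
// and return the third field, or "" if the output is too short.
func hostIPFromNslookup(out string) string {
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return ""
	}
	fields := strings.Split(lines[4], " ")
	if len(fields) < 3 {
		return ""
	}
	return fields[2]
}

func main() {
	sample := "Server:    10.96.0.10\nAddress 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n\nName:      host.minikube.internal\nAddress 1: 192.168.67.1 host.minikube.internal\n"
	fmt.Println(hostIPFromNslookup(sample)) // prints 192.168.67.1, the address pinged above
}
```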
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.01s)

TestMultiNode/serial/AddNode (16.47s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-792020 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-792020 -v=5 --alsologtostderr: (15.68637478s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (16.47s)

TestMultiNode/serial/MultiNodeLabels (0.13s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-792020 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.13s)

TestMultiNode/serial/ProfileList (0.88s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.88s)

TestMultiNode/serial/CopyFile (10.99s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp testdata/cp-test.txt multinode-792020:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile221216617/001/cp-test_multinode-792020.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020:/home/docker/cp-test.txt multinode-792020-m02:/home/docker/cp-test_multinode-792020_multinode-792020-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m02 "sudo cat /home/docker/cp-test_multinode-792020_multinode-792020-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020:/home/docker/cp-test.txt multinode-792020-m03:/home/docker/cp-test_multinode-792020_multinode-792020-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m03 "sudo cat /home/docker/cp-test_multinode-792020_multinode-792020-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp testdata/cp-test.txt multinode-792020-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile221216617/001/cp-test_multinode-792020-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020-m02:/home/docker/cp-test.txt multinode-792020:/home/docker/cp-test_multinode-792020-m02_multinode-792020.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020 "sudo cat /home/docker/cp-test_multinode-792020-m02_multinode-792020.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020-m02:/home/docker/cp-test.txt multinode-792020-m03:/home/docker/cp-test_multinode-792020-m02_multinode-792020-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m03 "sudo cat /home/docker/cp-test_multinode-792020-m02_multinode-792020-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp testdata/cp-test.txt multinode-792020-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile221216617/001/cp-test_multinode-792020-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020-m03:/home/docker/cp-test.txt multinode-792020:/home/docker/cp-test_multinode-792020-m03_multinode-792020.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020 "sudo cat /home/docker/cp-test_multinode-792020-m03_multinode-792020.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 cp multinode-792020-m03:/home/docker/cp-test.txt multinode-792020-m02:/home/docker/cp-test_multinode-792020-m03_multinode-792020-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 ssh -n multinode-792020-m02 "sudo cat /home/docker/cp-test_multinode-792020-m03_multinode-792020-m02.txt"
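The block above repeats the same copy-and-verify pattern for every node pair. A minimal sketch of one round trip using the minikube CLI directly; the bare `minikube` binary name and the single node name are placeholders, whereas the `cp` and `ssh -n` invocations mirror the commands logged above:

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile, node := "multinode-792020", "multinode-792020-m02" // placeholders from the log above

	// Copy a local file into the node, as `minikube cp` does in the test.
	if out, err := exec.Command("minikube", "-p", profile, "cp",
		"testdata/cp-test.txt", node+":/home/docker/cp-test.txt").CombinedOutput(); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}

	// Read it back over SSH and compare with the local source.
	remote, err := exec.Command("minikube", "-p", profile, "ssh", "-n", node,
		"sudo cat /home/docker/cp-test.txt").Output()
	if err != nil {
		fmt.Println("ssh cat failed:", err)
		return
	}
	local, err := os.ReadFile("testdata/cp-test.txt")
	if err != nil {
		fmt.Println("reading local file failed:", err)
		return
	}
	if bytes.Equal(bytes.TrimSpace(remote), bytes.TrimSpace(local)) {
		fmt.Println("file round-tripped intact")
	} else {
		fmt.Println("contents differ after copy")
	}
}
```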
--- PASS: TestMultiNode/serial/CopyFile (10.99s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-792020 node stop m03: (1.207488699s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-792020 status: exit status 7 (527.856399ms)

-- stdout --
	multinode-792020
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-792020-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-792020-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-792020 status --alsologtostderr: exit status 7 (547.748618ms)

-- stdout --
	multinode-792020
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-792020-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-792020-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0929 13:39:04.137972 1300115 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:39:04.138133 1300115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:39:04.138142 1300115 out.go:374] Setting ErrFile to fd 2...
	I0929 13:39:04.138147 1300115 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:39:04.138391 1300115 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:39:04.138579 1300115 out.go:368] Setting JSON to false
	I0929 13:39:04.138616 1300115 mustload.go:65] Loading cluster: multinode-792020
	I0929 13:39:04.138723 1300115 notify.go:220] Checking for updates...
	I0929 13:39:04.139019 1300115 config.go:182] Loaded profile config "multinode-792020": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:39:04.139039 1300115 status.go:174] checking status of multinode-792020 ...
	I0929 13:39:04.139840 1300115 cli_runner.go:164] Run: docker container inspect multinode-792020 --format={{.State.Status}}
	I0929 13:39:04.158163 1300115 status.go:371] multinode-792020 host status = "Running" (err=<nil>)
	I0929 13:39:04.158188 1300115 host.go:66] Checking if "multinode-792020" exists ...
	I0929 13:39:04.158538 1300115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-792020
	I0929 13:39:04.186963 1300115 host.go:66] Checking if "multinode-792020" exists ...
	I0929 13:39:04.187252 1300115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:39:04.187298 1300115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-792020
	I0929 13:39:04.216686 1300115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34063 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/multinode-792020/id_rsa Username:docker}
	I0929 13:39:04.313747 1300115 ssh_runner.go:195] Run: systemctl --version
	I0929 13:39:04.317933 1300115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:39:04.329959 1300115 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0929 13:39:04.410010 1300115 info.go:266] docker info: {ID:6ZPO:QZND:VNGE:LUKL:4Y3K:XELL:AAX4:2GTK:E6LM:MPRN:3ZXR:TTMR Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:62 SystemTime:2025-09-29 13:39:04.400188626 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1084-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214835200 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-30-239 Labels:[] ExperimentalBuild:false ServerVersion:28.1.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx P
ath:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.23.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.35.1]] Warnings:<nil>}}
	I0929 13:39:04.410582 1300115 kubeconfig.go:125] found "multinode-792020" server: "https://192.168.67.2:8443"
	I0929 13:39:04.410627 1300115 api_server.go:166] Checking apiserver status ...
	I0929 13:39:04.410678 1300115 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0929 13:39:04.422706 1300115 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2242/cgroup
	I0929 13:39:04.431837 1300115 api_server.go:182] apiserver freezer: "12:freezer:/docker/e732b8cb85993febe6a8de8f49908b0f788503e6b0ba0aa69efdb7a6fa2c219b/kubepods/burstable/pod0bca96d0e1d53e037df56dc1bba96738/34060a497f64ab41feece32228c3eba475f40d5c76b240fb1075dfdbe1b96901"
	I0929 13:39:04.431907 1300115 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/e732b8cb85993febe6a8de8f49908b0f788503e6b0ba0aa69efdb7a6fa2c219b/kubepods/burstable/pod0bca96d0e1d53e037df56dc1bba96738/34060a497f64ab41feece32228c3eba475f40d5c76b240fb1075dfdbe1b96901/freezer.state
	I0929 13:39:04.440599 1300115 api_server.go:204] freezer state: "THAWED"
	I0929 13:39:04.440628 1300115 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0929 13:39:04.448852 1300115 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0929 13:39:04.448883 1300115 status.go:463] multinode-792020 apiserver status = Running (err=<nil>)
	I0929 13:39:04.448895 1300115 status.go:176] multinode-792020 status: &{Name:multinode-792020 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:39:04.448914 1300115 status.go:174] checking status of multinode-792020-m02 ...
	I0929 13:39:04.449219 1300115 cli_runner.go:164] Run: docker container inspect multinode-792020-m02 --format={{.State.Status}}
	I0929 13:39:04.466647 1300115 status.go:371] multinode-792020-m02 host status = "Running" (err=<nil>)
	I0929 13:39:04.466671 1300115 host.go:66] Checking if "multinode-792020-m02" exists ...
	I0929 13:39:04.466985 1300115 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-792020-m02
	I0929 13:39:04.484175 1300115 host.go:66] Checking if "multinode-792020-m02" exists ...
	I0929 13:39:04.484499 1300115 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0929 13:39:04.484580 1300115 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-792020-m02
	I0929 13:39:04.501248 1300115 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34068 SSHKeyPath:/home/jenkins/minikube-integration/21652-1125775/.minikube/machines/multinode-792020-m02/id_rsa Username:docker}
	I0929 13:39:04.597449 1300115 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0929 13:39:04.609895 1300115 status.go:176] multinode-792020-m02 status: &{Name:multinode-792020-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:39:04.609928 1300115 status.go:174] checking status of multinode-792020-m03 ...
	I0929 13:39:04.610227 1300115 cli_runner.go:164] Run: docker container inspect multinode-792020-m03 --format={{.State.Status}}
	I0929 13:39:04.626746 1300115 status.go:371] multinode-792020-m03 host status = "Stopped" (err=<nil>)
	I0929 13:39:04.626768 1300115 status.go:384] host is not running, skipping remaining checks
	I0929 13:39:04.626775 1300115 status.go:176] multinode-792020-m03 status: &{Name:multinode-792020-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
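The stderr above also documents how minikube status decides that an apiserver is Running on the docker driver: it pgreps for the kube-apiserver process, resolves that PID's freezer cgroup, confirms the cgroup is THAWED, and only then probes /healthz. A minimal way to replay the same sequence by hand, assuming the multinode-792020 profile from this run is still up (PID 2242 and the container hash are specific to this run):

	# locate the apiserver process inside the control-plane node (same command as the log)
	out/minikube-linux-arm64 ssh -p multinode-792020 "sudo pgrep -xnf kube-apiserver.*minikube.*"
	# resolve its freezer cgroup; substitute the PID printed by pgrep for 2242
	out/minikube-linux-arm64 ssh -p multinode-792020 "sudo egrep ^[0-9]+:freezer: /proc/2242/cgroup"
	# with the cgroup THAWED, probe the apiserver through the kubeconfig context
	kubectl --context multinode-792020 get --raw=/healthz    # expect: ok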

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (9.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 node start m03 -v=5 --alsologtostderr
E0929 13:39:13.311994 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-792020 node start m03 -v=5 --alsologtostderr: (8.823742286s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.63s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (77.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-792020
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-792020
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-792020: (22.701061409s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792020 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-792020 --wait=true -v=5 --alsologtostderr: (54.512121615s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-792020
--- PASS: TestMultiNode/serial/RestartKeepsNodes (77.33s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-792020 node delete m03: (5.16927285s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.87s)
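The last command above checks every node's Ready condition with a go-template. An equivalent jsonpath one-liner (not part of the test, just easier to quote interactively), assuming the kubeconfig context created for this profile:

	kubectl --context multinode-792020 get nodes \
	  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'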

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (21.63s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 stop
E0929 13:40:38.296630 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-792020 stop: (21.431081914s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-792020 status: exit status 7 (92.78208ms)

                                                
                                                
-- stdout --
	multinode-792020
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-792020-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-792020 status --alsologtostderr: exit status 7 (102.201977ms)

                                                
                                                
-- stdout --
	multinode-792020
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-792020-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0929 13:40:59.034621 1313481 out.go:360] Setting OutFile to fd 1 ...
	I0929 13:40:59.034751 1313481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:40:59.034760 1313481 out.go:374] Setting ErrFile to fd 2...
	I0929 13:40:59.034765 1313481 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0929 13:40:59.035039 1313481 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21652-1125775/.minikube/bin
	I0929 13:40:59.035223 1313481 out.go:368] Setting JSON to false
	I0929 13:40:59.035275 1313481 mustload.go:65] Loading cluster: multinode-792020
	I0929 13:40:59.035347 1313481 notify.go:220] Checking for updates...
	I0929 13:40:59.036601 1313481 config.go:182] Loaded profile config "multinode-792020": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
	I0929 13:40:59.036629 1313481 status.go:174] checking status of multinode-792020 ...
	I0929 13:40:59.037861 1313481 cli_runner.go:164] Run: docker container inspect multinode-792020 --format={{.State.Status}}
	I0929 13:40:59.058164 1313481 status.go:371] multinode-792020 host status = "Stopped" (err=<nil>)
	I0929 13:40:59.058185 1313481 status.go:384] host is not running, skipping remaining checks
	I0929 13:40:59.058192 1313481 status.go:176] multinode-792020 status: &{Name:multinode-792020 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0929 13:40:59.058224 1313481 status.go:174] checking status of multinode-792020-m02 ...
	I0929 13:40:59.058519 1313481 cli_runner.go:164] Run: docker container inspect multinode-792020-m02 --format={{.State.Status}}
	I0929 13:40:59.084728 1313481 status.go:371] multinode-792020-m02 host status = "Stopped" (err=<nil>)
	I0929 13:40:59.084748 1313481 status.go:384] host is not running, skipping remaining checks
	I0929 13:40:59.084755 1313481 status.go:176] multinode-792020-m02 status: &{Name:multinode-792020-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (21.63s)
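Both status invocations above print the per-node table but exit with code 7 once every host is stopped, so scripts wrapping minikube status should branch on the exit code rather than parse the text. A small sketch against the same profile:

	out/minikube-linux-arm64 -p multinode-792020 status
	rc=$?
	echo "minikube status exit code: ${rc}"    # 7 in the run above, i.e. host(s) stopped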

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (54.73s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792020 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-792020 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=docker: (54.046023571s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-792020 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.73s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (36.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-792020
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792020-m02 --driver=docker  --container-runtime=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-792020-m02 --driver=docker  --container-runtime=docker: exit status 14 (103.0139ms)

                                                
                                                
-- stdout --
	* [multinode-792020-m02] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-792020-m02' is duplicated with machine name 'multinode-792020-m02' in profile 'multinode-792020'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-792020-m03 --driver=docker  --container-runtime=docker
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-792020-m03 --driver=docker  --container-runtime=docker: (34.148407698s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-792020
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-792020: exit status 80 (355.404199ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-792020 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-792020-m03 already exists in multinode-792020-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-792020-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-792020-m03: (2.15723722s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (36.82s)

                                                
                                    
x
+
TestPreload (152.22s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-109006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0
E0929 13:42:50.245340 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:43:41.369273 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-109006 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.32.0: (1m20.819935794s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-109006 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-arm64 -p test-preload-109006 image pull gcr.io/k8s-minikube/busybox: (2.273033392s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-109006
preload_test.go:57: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-109006: (10.839390204s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-109006 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker
preload_test.go:65: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-109006 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker: (55.891663306s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-109006 image list
helpers_test.go:175: Cleaning up "test-preload-109006" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-109006
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-109006: (2.172049238s)
--- PASS: TestPreload (152.22s)

                                                
                                    
x
+
TestScheduledStopUnix (106.41s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-101501 --memory=3072 --driver=docker  --container-runtime=docker
E0929 13:45:38.296070 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-101501 --memory=3072 --driver=docker  --container-runtime=docker: (33.180080131s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-101501 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-101501 -n scheduled-stop-101501
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-101501 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0929 13:45:40.593938 1127640 retry.go:31] will retry after 144.719µs: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.595211 1127640 retry.go:31] will retry after 169.099µs: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.595432 1127640 retry.go:31] will retry after 239.839µs: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.596347 1127640 retry.go:31] will retry after 275.513µs: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.597501 1127640 retry.go:31] will retry after 470.425µs: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.598629 1127640 retry.go:31] will retry after 1.084784ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.600826 1127640 retry.go:31] will retry after 813.007µs: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.601963 1127640 retry.go:31] will retry after 2.412824ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.605167 1127640 retry.go:31] will retry after 3.479594ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.609390 1127640 retry.go:31] will retry after 3.392132ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.613619 1127640 retry.go:31] will retry after 6.595344ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.620834 1127640 retry.go:31] will retry after 8.863929ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.630084 1127640 retry.go:31] will retry after 17.757094ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.648766 1127640 retry.go:31] will retry after 12.161348ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.662028 1127640 retry.go:31] will retry after 26.899989ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
I0929 13:45:40.689450 1127640 retry.go:31] will retry after 22.352313ms: open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/scheduled-stop-101501/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-101501 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-101501 -n scheduled-stop-101501
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-101501
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-101501 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-101501
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-101501: exit status 7 (66.048332ms)

                                                
                                                
-- stdout --
	scheduled-stop-101501
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-101501 -n scheduled-stop-101501
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-101501 -n scheduled-stop-101501: exit status 7 (64.340666ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-101501" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-101501
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-101501: (1.655612055s)
--- PASS: TestScheduledStopUnix (106.41s)
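Everything this test exercises is driven by the flags visible in the log; the same scheduled-stop flow by hand, assuming a profile named scheduled-stop-101501:

	out/minikube-linux-arm64 stop -p scheduled-stop-101501 --schedule 5m        # queue a stop in 5 minutes
	out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-101501
	out/minikube-linux-arm64 stop -p scheduled-stop-101501 --cancel-scheduled   # cancel the pending stop
	out/minikube-linux-arm64 stop -p scheduled-stop-101501 --schedule 15s       # reschedule; fires ~15s later
	out/minikube-linux-arm64 status -p scheduled-stop-101501                    # exit 7 once the stop has run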

                                                
                                    
x
+
TestSkaffold (140.56s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /tmp/skaffold.exe1287337830 version
skaffold_test.go:63: skaffold version: v2.16.1
skaffold_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p skaffold-948073 --memory=3072 --driver=docker  --container-runtime=docker
skaffold_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p skaffold-948073 --memory=3072 --driver=docker  --container-runtime=docker: (30.768417765s)
skaffold_test.go:86: copying out/minikube-linux-arm64 to /home/jenkins/workspace/Docker_Linux_docker_arm64/out/minikube
skaffold_test.go:105: (dbg) Run:  /tmp/skaffold.exe1287337830 run --minikube-profile skaffold-948073 --kube-context skaffold-948073 --status-check=true --port-forward=false --interactive=false
E0929 13:47:50.245463 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
skaffold_test.go:105: (dbg) Done: /tmp/skaffold.exe1287337830 run --minikube-profile skaffold-948073 --kube-context skaffold-948073 --status-check=true --port-forward=false --interactive=false: (1m31.851757855s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:352: "leeroy-app-664554f47f-4r5hh" [b2c79c6c-3b08-44c7-b00d-16b110366581] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003367436s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:352: "leeroy-web-96746d744-vhjv2" [ee2c2f3d-b0a5-4cff-af99-0b271798e399] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.00303527s
helpers_test.go:175: Cleaning up "skaffold-948073" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p skaffold-948073
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p skaffold-948073: (2.959244493s)
--- PASS: TestSkaffold (140.56s)

                                                
                                    
x
+
TestInsufficientStorage (11.5s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-808516 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-808516 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (9.229494098s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"772c4c1b-5641-40b1-be80-ccc032cbc4a6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-808516] minikube v1.37.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"5d3b7357-88d4-4726-ae10-b6f08650549d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21652"}}
	{"specversion":"1.0","id":"524b63f5-29d8-4759-b53f-1e8017c84528","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"1b8d674c-d77b-411e-bb59-a32f979ac905","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig"}}
	{"specversion":"1.0","id":"70ad5054-c97b-4164-aa69-569fd913143c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube"}}
	{"specversion":"1.0","id":"58370c6a-38c3-4246-a7a1-8a6145af96e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"0c78295e-af0e-4925-b1c1-5cfd704372f7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"4adc3154-5d9d-4c52-a009-a7fa3b3c898c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"5a56ecab-199f-4e0e-a469-a113daef151f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"98392f70-4dc5-400f-bafd-25f807e239f9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e4d3b76-7ce9-4e9d-b937-3a867d08829f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"c0a89751-0bef-48a5-bf0b-6b2888b708dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-808516\" primary control-plane node in \"insufficient-storage-808516\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"bc99db1f-0611-47ba-a681-b4bcf55d4eba","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"ca3a53f3-aba7-47ab-bac3-88107cd9fe88","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac0f1ee8-f7f2-4320-a7ba-065dc7d5f609","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-808516 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-808516 --output=json --layout=cluster: exit status 7 (300.988088ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-808516","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-808516","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 13:49:23.385697 1347687 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-808516" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-808516 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-808516 --output=json --layout=cluster: exit status 7 (296.960961ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-808516","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-808516","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0929 13:49:23.683033 1347750 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-808516" does not appear in /home/jenkins/minikube-integration/21652-1125775/kubeconfig
	E0929 13:49:23.693609 1347750 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/insufficient-storage-808516/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-808516" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-808516
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-808516: (1.675177733s)
--- PASS: TestInsufficientStorage (11.50s)
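The JSON events above expose the knobs used to simulate a full disk: MINIKUBE_TEST_STORAGE_CAPACITY and MINIKUBE_TEST_AVAILABLE_STORAGE show up in the startup output, and the start aborts with exit code 26 (RSRC_DOCKER_STORAGE) unless --force is passed. A sketch of the same invocation, on the assumption that those variables behave the same outside the test harness:

	MINIKUBE_TEST_STORAGE_CAPACITY=100 MINIKUBE_TEST_AVAILABLE_STORAGE=19 \
	  out/minikube-linux-arm64 start -p insufficient-storage-808516 --memory=3072 \
	  --output=json --wait=true --driver=docker --container-runtime=docker
	echo $?    # 26 in the run above; add --force to skip the storage check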

                                                
                                    
x
+
TestRunningBinaryUpgrade (78.57s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.1972923705 start -p running-upgrade-189830 --memory=3072 --vm-driver=docker  --container-runtime=docker
E0929 13:57:50.245881 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.1972923705 start -p running-upgrade-189830 --memory=3072 --vm-driver=docker  --container-runtime=docker: (41.146887622s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-189830 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-189830 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (33.579569701s)
helpers_test.go:175: Cleaning up "running-upgrade-189830" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-189830
E0929 13:58:59.883254 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-189830: (2.087379356s)
--- PASS: TestRunningBinaryUpgrade (78.57s)

                                                
                                    
x
+
TestKubernetesUpgrade (386.98s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-710674 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-710674 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (45.126154078s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-710674
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-710674: (11.088444043s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-710674 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-710674 status --format={{.Host}}: exit status 7 (81.284554ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-710674 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-710674 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (4m39.476978627s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-710674 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-710674 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-710674 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 106 (150.263162ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-710674] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-710674
	    minikube start -p kubernetes-upgrade-710674 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7106742 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-710674 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-710674 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-710674 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (48.120829094s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-710674" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-710674
E0929 14:03:59.883674 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-710674: (2.778105528s)
--- PASS: TestKubernetesUpgrade (386.98s)

                                                
                                    
x
+
TestMissingContainerUpgrade (101.26s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.587480022 start -p missing-upgrade-412824 --memory=3072 --driver=docker  --container-runtime=docker
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.587480022 start -p missing-upgrade-412824 --memory=3072 --driver=docker  --container-runtime=docker: (34.213061278s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-412824
E0929 13:56:43.765532 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-412824: (10.537200907s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-412824
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-412824 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-412824 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (51.898013198s)
helpers_test.go:175: Cleaning up "missing-upgrade-412824" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-412824
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-412824: (2.653637147s)
--- PASS: TestMissingContainerUpgrade (101.26s)

                                                
                                    
x
+
TestPause/serial/Start (107.02s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-460160 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker
E0929 13:50:38.296183 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-460160 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (1m47.021096071s)
--- PASS: TestPause/serial/Start (107.02s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763998 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-763998 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=docker: exit status 14 (91.285169ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-763998] minikube v1.37.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=21652
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21652-1125775/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21652-1125775/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (33.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763998 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-763998 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (33.49571393s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-763998 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (33.90s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (53.09s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-460160 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-460160 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (53.046825143s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (53.09s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (19.88s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763998 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-763998 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (17.445098607s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-763998 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-763998 status -o json: exit status 2 (441.389354ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-763998","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-763998
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-763998: (1.996479271s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.88s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (8.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763998 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-763998 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (8.272518972s)
--- PASS: TestNoKubernetes/serial/Start (8.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-763998 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-763998 "sudo systemctl is-active --quiet service kubelet": exit status 1 (298.554601ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.30s)
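The exit status 1 here is minikube ssh relaying a failed remote command: systemctl is-active exits 3 for an inactive unit, which the stderr surfaces as "Process exited with status 3". Dropping --quiet should make the unit state itself visible (a sketch, assuming the NoKubernetes-763998 profile is still up):

	out/minikube-linux-arm64 ssh -p NoKubernetes-763998 "sudo systemctl is-active kubelet"
	# prints the unit state (inactive while Kubernetes is disabled); systemctl's own
	# exit code 3 is what minikube ssh reports back as a non-zero exit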

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.13s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.46s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-763998
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-763998: (1.464498854s)
--- PASS: TestNoKubernetes/serial/Stop (1.46s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (8.83s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-763998 --driver=docker  --container-runtime=docker
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-763998 --driver=docker  --container-runtime=docker: (8.825404189s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.83s)

                                                
                                    
x
+
TestPause/serial/Pause (0.79s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-460160 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.79s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.38s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-460160 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-460160 --output=json --layout=cluster: exit status 2 (377.54614ms)

                                                
                                                
-- stdout --
	{"Name":"pause-460160","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-460160","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.38s)
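The --layout=cluster output above is one JSON document, which makes it easy to script around paused clusters; note that status exits 2 while the cluster is paused, as shown. A small sketch, assuming jq is available:

	out/minikube-linux-arm64 status -p pause-460160 --output=json --layout=cluster \
	  | jq -r '.StatusName, .Nodes[0].Components.kubelet.StatusName'
	# prints "Paused" and "Stopped"; minikube itself exits 2 for a paused cluster,
	# so add set -o pipefail if that should fail the pipeline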

                                                
                                    
x
+
TestPause/serial/Unpause (0.64s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-460160 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.64s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.81s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-460160 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.81s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.35s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-460160 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-460160 --alsologtostderr -v=5: (2.34753766s)
--- PASS: TestPause/serial/DeletePaused (2.35s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-763998 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-763998 "sudo systemctl is-active --quiet service kubelet": exit status 1 (344.822879ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.34s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-460160
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-460160: exit status 1 (22.471071ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-460160: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.45s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.74s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (88.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2042803110 start -p stopped-upgrade-761566 --memory=3072 --vm-driver=docker  --container-runtime=docker
E0929 13:54:40.879954 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:55:21.841663 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2042803110 start -p stopped-upgrade-761566 --memory=3072 --vm-driver=docker  --container-runtime=docker: (56.524685487s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2042803110 -p stopped-upgrade-761566 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2042803110 -p stopped-upgrade-761566 stop: (10.868192374s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-761566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
E0929 13:55:38.296532 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 13:55:53.314277 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-761566 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (21.284177389s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (88.68s)
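
Condensed, the upgrade flow exercised above is: provision a cluster with an older minikube release, stop it, then start the same profile with the binary under test. A sketch of the three steps, with the run-specific temp binary path replaced by a placeholder:

    # 1. provision with the old release (v1.32.0 in this run)
    <old-minikube-binary> start -p stopped-upgrade-761566 --memory=3072 --vm-driver=docker --container-runtime=docker
    # 2. stop the cluster using the same old release
    <old-minikube-binary> -p stopped-upgrade-761566 stop
    # 3. restart the stopped cluster with the binary under test
    out/minikube-linux-arm64 start -p stopped-upgrade-761566 --memory=3072 --alsologtostderr -v=1 --driver=docker --container-runtime=docker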

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-761566
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-761566: (1.115712177s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (78.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker
E0929 13:59:27.608636 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=docker: (1m18.315991149s)
--- PASS: TestNetworkPlugins/group/auto/Start (78.32s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-212797 "pgrep -a kubelet"
I0929 14:00:20.286184 1127640 config.go:182] Loaded profile config "auto-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-212797 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-k797b" [3b37fa66-0d75-4e14-a706-a4b3670e8cc5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 14:00:21.370856 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-k797b" [3b37fa66-0d75-4e14-a706-a4b3670e8cc5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004171364s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.30s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-212797 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
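
The DNS, Localhost, and HairPin checks in this group all run against the netcat deployment created in the NetCatPod step. Condensed from the commands in the log (the context name auto-212797 is run-specific; "netcat" in the last command is presumably the deployment's own service, which is what makes it a hairpin check):

    kubectl --context auto-212797 exec deployment/netcat -- nslookup kubernetes.default                    # cluster DNS
    kubectl --context auto-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"    # localhost reachability
    kubectl --context auto-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"       # hairpin: pod reaching itself via the service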

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (67.44s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=docker: (1m7.442932514s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (67.44s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-7vdgt" [8cd4b553-e4f8-482d-a067-9e14bf7f47f2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005395965s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
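
The ControllerPod step waits up to 10m for a pod labelled app=kindnet in kube-system to come up. Outside the test harness, roughly the same wait can be expressed with kubectl (label, namespace, context, and timeout taken from the log; the helper waits for the pod to be Running, for which Ready is a close stand-in):

    kubectl --context kindnet-212797 -n kube-system wait pod \
      -l app=kindnet --for=condition=Ready --timeout=10m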

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-212797 "pgrep -a kubelet"
I0929 14:02:06.638906 1127640 config.go:182] Loaded profile config "kindnet-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-212797 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jdld9" [f409ef76-a991-4df9-960e-ef91377f1eef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jdld9" [f409ef76-a991-4df9-960e-ef91377f1eef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.002755995s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-212797 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (62.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=docker: (1m2.600532876s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (62.60s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-212797 "pgrep -a kubelet"
I0929 14:05:03.311073 1127640 config.go:182] Loaded profile config "custom-flannel-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.41s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (12.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-212797 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-tg72w" [b37389c0-4e7b-4728-963a-b4c8770818b1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-tg72w" [b37389c0-4e7b-4728-963a-b4c8770818b1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.002812403s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.40s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-212797 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/false/Start (81.74s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p false-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker
E0929 14:05:41.061568 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:06:01.543207 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:06:42.513094 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:00.341063 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:00.347510 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:00.360765 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:00.382159 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:00.423516 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:00.504955 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:00.666543 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:00.987901 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:07:01.630391 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p false-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker  --container-runtime=docker: (1m21.744068003s)
--- PASS: TestNetworkPlugins/group/false/Start (81.74s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p false-212797 "pgrep -a kubelet"
E0929 14:07:02.912363 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I0929 14:07:03.103065 1127640 config.go:182] Loaded profile config "false-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.36s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (10.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-212797 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-dl5p8" [cd9de92b-b574-425f-b0f4-d1ea32341e0d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 14:07:05.473974 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-dl5p8" [cd9de92b-b574-425f-b0f4-d1ea32341e0d] Running
E0929 14:07:10.595555 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 10.003479101s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (10.41s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-212797 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (79.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker
E0929 14:07:50.245689 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=docker: (1m19.264281238s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.26s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (122.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker
E0929 14:08:04.434453 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:08:22.282367 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:08:59.882932 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=docker: (2m2.477270199s)
--- PASS: TestNetworkPlugins/group/flannel/Start (122.48s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-212797 "pgrep -a kubelet"
I0929 14:09:02.095503 1127640 config.go:182] Loaded profile config "enable-default-cni-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-212797 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-vzksz" [e39c8e07-ae1a-4df6-b799-841c7ab8df79] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-vzksz" [e39c8e07-ae1a-4df6-b799-841c7ab8df79] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.00383564s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-212797 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (74.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker
E0929 14:09:44.203806 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=docker: (1m14.149776924s)
--- PASS: TestNetworkPlugins/group/bridge/Start (74.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-g628f" [9bbc200f-7d14-49e5-a8e0-ac591ee1b7fe] Running
E0929 14:10:03.684103 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:03.690473 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:03.701763 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:03.722987 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:03.764324 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:03.845650 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:04.007057 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:04.328662 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:04.970588 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:06.251876 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003592543s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-212797 "pgrep -a kubelet"
I0929 14:10:07.710430 1127640 config.go:182] Loaded profile config "flannel-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-212797 replace --force -f testdata/netcat-deployment.yaml
I0929 14:10:08.181243 1127640 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mcgv8" [c1c050f2-3264-4652-ab9d-ee3f5b653981] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0929 14:10:08.813698 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-mcgv8" [c1c050f2-3264-4652-ab9d-ee3f5b653981] Running
E0929 14:10:13.935529 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004123981s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.49s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-212797 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (70.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kubenet-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0929 14:10:44.660089 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:10:48.276331 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kubenet-212797 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m10.877414521s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (70.88s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-212797 "pgrep -a kubelet"
I0929 14:10:52.620874 1127640 config.go:182] Loaded profile config "bridge-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.35s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-212797 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-kdfbv" [ff8be6cd-045f-4172-af4d-3b8655fa3288] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-kdfbv" [ff8be6cd-045f-4172-af4d-3b8655fa3288] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003606475s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.33s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-212797 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.32s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.27s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (78.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-062731 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-062731 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (1m18.227143375s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (78.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kubenet-212797 "pgrep -a kubelet"
I0929 14:11:52.934527 1127640 config.go:182] Loaded profile config "kubenet-212797": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.45s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (11.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-212797 replace --force -f testdata/netcat-deployment.yaml
I0929 14:11:53.304213 1127640 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6zmk5" [5c193c4f-71bb-4876-b671-92fb17134e28] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6zmk5" [5c193c4f-71bb-4876-b671-92fb17134e28] Running
E0929 14:12:00.340976 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:03.455689 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:03.461992 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:03.473738 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:03.495202 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:03.536662 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:03.618199 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:03.779805 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:04.102061 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 11.004275369s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (11.40s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-212797 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-212797 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0929 14:12:04.744020 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)
E0929 14:52:47.162007 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:52:50.245837 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:53:47.929505 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:53:59.883332 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:54:02.358971 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (80.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 14:12:28.045799 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:33.315683 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:12:44.432264 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m20.380444693s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (80.38s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-062731 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [384d14cd-bc8f-4ea0-b5a3-b8b51ef8223d] Pending
E0929 14:12:47.543688 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [384d14cd-bc8f-4ea0-b5a3-b8b51ef8223d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E0929 14:12:50.246026 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "busybox" [384d14cd-bc8f-4ea0-b5a3-b8b51ef8223d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.00355996s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-062731 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.45s)
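
DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to become ready, and then reads the open-file limit inside the container. A manual sketch (the test waits on the label integration-test=busybox; waiting on the pod name directly is a shortcut that matches the pod shown in the log):

    kubectl --context old-k8s-version-062731 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-062731 wait pod busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-062731 exec busybox -- /bin/sh -c "ulimit -n"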

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.65s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-062731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-062731 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.48413196s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-062731 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.65s)
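
EnableAddonWhileActive turns on the metrics-server addon while the cluster is running, overriding its image and registry (fake.domain is not a real registry, so the test apparently avoids pulling the genuine image), then confirms the deployment object exists. The same two steps by hand, taken from the log:

    out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-062731 \
      --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
    kubectl --context old-k8s-version-062731 describe deploy/metrics-server -n kube-system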

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (11.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-062731 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-062731 --alsologtostderr -v=3: (11.223393322s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-062731 -n old-k8s-version-062731
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-062731 -n old-k8s-version-062731: exit status 7 (75.837941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-062731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.22s)
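
Note the exit code handling above: in this run minikube status exited 7 with "Stopped" for the stopped host, and the test explicitly treats that as acceptable ("may be ok") before enabling the dashboard addon on the stopped profile. The same sequence by hand, using the commands from the log:

    out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-062731 -n old-k8s-version-062731
    # exit status 7 with output "Stopped" is the expected result for a stopped cluster in this test
    out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-062731 --images=MetricsScraper=registry.k8s.io/echoserver:1.4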

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (29.97s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-062731 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0
E0929 14:13:25.393657 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-062731 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.28.0: (29.586211945s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-062731 -n old-k8s-version-062731
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (29.97s)

TestStartStop/group/no-preload/serial/DeployApp (9.39s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-983174 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [487a452e-f7fd-40fd-97d1-0f85e8b6763c] Pending
helpers_test.go:352: "busybox" [487a452e-f7fd-40fd-97d1-0f85e8b6763c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [487a452e-f7fd-40fd-97d1-0f85e8b6763c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003657876s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-983174 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.39s)
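Each DeployApp step in this report follows the same three moves: create testdata/busybox.yaml, wait up to 8m0s for the pod labelled integration-test=busybox to become healthy, then exec `ulimit -n` inside it. Below is a rough, hypothetical reproduction with plain kubectl; the real test polls the pod list through its helpers_test.go helpers rather than `kubectl wait`.

// Hypothetical sketch of the DeployApp flow, assuming a kubeconfig context
// named after the profile and a manifest that labels the pod
// integration-test=busybox (as testdata/busybox.yaml does in the log).
package main

import (
	"fmt"
	"os/exec"
)

// run executes kubectl against the given context and returns combined output.
func run(context string, args ...string) (string, error) {
	full := append([]string{"--context", context}, args...)
	out, err := exec.Command("kubectl", full...).CombinedOutput()
	return string(out), err
}

func main() {
	ctx := "no-preload-983174" // profile/context name from the log

	steps := [][]string{
		{"create", "-f", "testdata/busybox.yaml"},
		// The test polls the pod list itself; `kubectl wait` is a simpler stand-in.
		{"wait", "--for=condition=Ready", "pod", "-l", "integration-test=busybox", "--timeout=8m"},
		{"exec", "busybox", "--", "/bin/sh", "-c", "ulimit -n"},
	}
	for _, s := range steps {
		out, err := run(ctx, s...)
		fmt.Print(out)
		if err != nil {
			panic(fmt.Sprintf("kubectl %v failed: %v", s, err))
		}
	}
}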

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-983174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-983174 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.018923643s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-983174 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.12s)
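The EnableAddonWhileActive steps enable metrics-server with the image and registry overrides shown above and then describe the resulting Deployment. The sketch below is an illustration rather than the test's assertion: it re-runs those two commands and merely greps the description for the fake.domain registry override, since the exact image reference minikube composes from --images/--registries is not spelled out in this log.

// Hypothetical sketch: enable the metrics-server addon with the overrides
// used in the log, then look for the override on the Deployment.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	profile := "no-preload-983174"

	enable := exec.Command("minikube", "addons", "enable", "metrics-server",
		"-p", profile,
		"--images=MetricsServer=registry.k8s.io/echoserver:1.4",
		"--registries=MetricsServer=fake.domain")
	if out, err := enable.CombinedOutput(); err != nil {
		panic(fmt.Sprintf("enable failed: %v\n%s", err, out))
	}

	describe := exec.Command("kubectl", "--context", profile,
		"describe", "deploy/metrics-server", "-n", "kube-system")
	out, err := describe.CombinedOutput()
	if err != nil {
		panic(err)
	}
	// Checking for the registry substring only; the full composed image
	// reference is an assumption and therefore not asserted here.
	if strings.Contains(string(out), "fake.domain") {
		fmt.Println("metrics-server deployment carries the fake.domain registry override")
	} else {
		fmt.Println("override not found in deployment description")
	}
}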

TestStartStop/group/no-preload/serial/Stop (10.92s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-983174 --alsologtostderr -v=3
E0929 14:13:59.883054 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:02.357914 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:02.364293 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:02.375663 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:02.397168 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:02.438813 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:02.520465 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:02.682020 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:03.003999 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:03.645540 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:04.927597 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:07.489994 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-983174 --alsologtostderr -v=3: (10.920674907s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (10.92s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-983174 -n no-preload-983174
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-983174 -n no-preload-983174: exit status 7 (70.753185ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-983174 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/no-preload/serial/SecondStart (52.56s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 14:14:12.611630 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:22.853722 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:43.335889 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:14:47.314974 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:01.311326 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:01.317786 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:01.329173 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:01.350624 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:01.392062 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:15:01.473785 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-983174 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (52.200115051s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-983174 -n no-preload-983174
E0929 14:15:01.635709 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (52.56s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-062731 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.25s)
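VerifyKubernetesImages lists the images loaded into the profile as JSON and reports anything outside the expected Kubernetes set, which is how gcr.io/k8s-minikube/busybox gets flagged above. The sketch below is a loose, hypothetical version of that idea; in particular the "repoTags" field name is an assumption about the JSON schema of `minikube image list --format=json`, and the expected-prefix list is illustrative only.

// Hypothetical sketch: list loaded images as JSON and flag ones that do
// not match an (illustrative) allowlist of expected prefixes.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "old-k8s-version-062731",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}

	var images []map[string]any // decode generically; the schema is assumed
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}

	expectedPrefixes := []string{"registry.k8s.io/", "gcr.io/k8s-minikube/storage-provisioner"}
	for _, img := range images {
		tags, _ := img["repoTags"].([]any) // assumed field name
		for _, t := range tags {
			tag := fmt.Sprint(t)
			known := false
			for _, p := range expectedPrefixes {
				if strings.HasPrefix(tag, p) {
					known = true
				}
			}
			if !known {
				fmt.Println("Found non-minikube image:", tag)
			}
		}
	}
}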

TestStartStop/group/old-k8s-version/serial/Pause (3.02s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-062731 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-062731 -n old-k8s-version-062731
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-062731 -n old-k8s-version-062731: exit status 2 (333.84964ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-062731 -n old-k8s-version-062731
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-062731 -n old-k8s-version-062731: exit status 2 (341.115667ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-062731 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-062731 -n old-k8s-version-062731
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-062731 -n old-k8s-version-062731
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.02s)
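The Pause sequence just above encodes a pair of expectations: after `pause`, status --format={{.APIServer}} prints Paused and --format={{.Kubelet}} prints Stopped, both with exit status 2, and `unpause` brings the status commands back to a clean exit. A table-driven Go sketch of the same checks follows; the expectations are transcribed from the log, not taken from the test source.

// Hypothetical sketch: pause a profile, read two status fields plus their
// exit codes, then unpause.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func status(profile, field string) (string, int) {
	cmd := exec.Command("minikube", "status", "--format={{."+field+"}}",
		"-p", profile, "-n", profile)
	out, err := cmd.Output()
	code := 0
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		code = exitErr.ExitCode()
	}
	return strings.TrimSpace(string(out)), code
}

func main() {
	profile := "old-k8s-version-062731"

	if out, err := exec.Command("minikube", "pause", "-p", profile).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("pause failed: %v\n%s", err, out))
	}

	checks := []struct {
		field, want string
		wantCode    int
	}{
		{"APIServer", "Paused", 2},
		{"Kubelet", "Stopped", 2},
	}
	for _, c := range checks {
		got, code := status(profile, c.field)
		fmt.Printf("%s=%s (exit %d), expected %s (exit %d)\n", c.field, got, code, c.want, c.wantCode)
	}

	if out, err := exec.Command("minikube", "unpause", "-p", profile).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("unpause failed: %v\n%s", err, out))
	}
}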

TestStartStop/group/embed-certs/serial/FirstStart (72.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 14:31:53.257104 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:00.342650 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kindnet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:03.456078 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/false-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:15.979035 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:47.162870 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:47.169280 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:47.180688 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:47.202177 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:47.243550 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:47.324947 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:47.486407 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:47.807983 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:48.449515 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:49.730826 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:50.245763 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/functional-085003/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:52.292067 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:32:57.413717 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (1m12.092543292s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.09s)

TestStartStop/group/embed-certs/serial/DeployApp (10.5s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-641794 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [04338c6d-75d6-4380-b6a7-7e258e6c7b20] Pending
helpers_test.go:352: "busybox" [04338c6d-75d6-4380-b6a7-7e258e6c7b20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [04338c6d-75d6-4380-b6a7-7e258e6c7b20] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.006228194s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-641794 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.50s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-983174 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/no-preload/serial/Pause (2.95s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-983174 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-983174 -n no-preload-983174
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-983174 -n no-preload-983174: exit status 2 (323.958381ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-983174 -n no-preload-983174
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-983174 -n no-preload-983174: exit status 2 (324.309737ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-983174 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-983174 -n no-preload-983174
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-983174 -n no-preload-983174
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.95s)

TestStartStop/group/newest-cni/serial/FirstStart (43.5s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (43.499549977s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (43.50s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-641794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0929 14:33:16.322459 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/kubenet-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-641794 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.126568753s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-641794 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.28s)

TestStartStop/group/embed-certs/serial/Stop (11.24s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-641794 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-641794 --alsologtostderr -v=3: (11.243901708s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.24s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-641794 -n embed-certs-641794
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-641794 -n embed-certs-641794: exit status 7 (86.765005ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-641794 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
E0929 14:33:28.137012 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.24s)

TestStartStop/group/embed-certs/serial/SecondStart (59.5s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 14:33:41.374408 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:47.929844 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:47.936142 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:47.947470 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:47.968774 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:48.010143 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:48.091489 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:48.252979 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:48.574500 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:49.216367 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:50.498261 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:33:53.059763 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-641794 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (58.908917708s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-641794 -n embed-certs-641794
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.50s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-093064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0929 14:33:58.181944 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-093064 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.155498284s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.16s)

TestStartStop/group/newest-cni/serial/Stop (10.97s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-093064 --alsologtostderr -v=3
E0929 14:33:59.883477 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/skaffold-948073/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:34:02.358527 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/enable-default-cni-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:34:08.423266 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:34:09.099322 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-093064 --alsologtostderr -v=3: (10.974861308s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.97s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-093064 -n newest-cni-093064
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-093064 -n newest-cni-093064: exit status 7 (69.06867ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-093064 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (17.01s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-093064 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (16.428667346s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-093064 -n newest-cni-093064
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.01s)
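This SecondStart waits only on apiserver, system_pods and default_sa (--wait=apiserver,system_pods,default_sa), consistent with the CNI warnings elsewhere in the group that workloads cannot schedule until the network plugin is set up. A hypothetical follow-up check that pokes those same three things by hand with ordinary kubectl calls (not the waiter minikube itself uses):

// Hypothetical sketch: spot-check the three components this start waited on.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	ctx := "newest-cni-093064" // context/profile name from the log

	checks := [][]string{
		{"get", "--raw", "/healthz"},                          // apiserver answers
		{"get", "pods", "-n", "kube-system"},                  // system pods are present
		{"get", "serviceaccount", "default", "-n", "default"}, // default SA exists
	}
	for _, args := range checks {
		full := append([]string{"--context", ctx}, args...)
		out, err := exec.Command("kubectl", full...).CombinedOutput()
		fmt.Printf("kubectl %v:\n%s\n", args, out)
		if err != nil {
			fmt.Println("check failed:", err)
		}
	}
}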

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-093064 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/newest-cni/serial/Pause (3.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-093064 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-093064 --alsologtostderr -v=1: (1.02268025s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-093064 -n newest-cni-093064
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-093064 -n newest-cni-093064: exit status 2 (308.227743ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-093064 -n newest-cni-093064
E0929 14:34:28.904656 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-093064 -n newest-cni-093064: exit status 2 (338.570506ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-093064 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-093064 -n newest-cni-093064
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-093064 -n newest-cni-093064
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.25s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.56s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 14:35:01.311928 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:35:03.684976 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/custom-flannel-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:35:09.866663 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:35:20.566845 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/auto-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (47.556523198s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (47.56s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.4s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [94471ca6-7ece-4f49-8594-4bfa1557697a] Pending
helpers_test.go:352: "busybox" [94471ca6-7ece-4f49-8594-4bfa1557697a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [94471ca6-7ece-4f49-8594-4bfa1557697a] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003240277s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.40s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-186820 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0929 14:35:31.021323 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/old-k8s-version-062731/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-186820 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-186820 --alsologtostderr -v=3
E0929 14:35:38.297157 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/addons-214477/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-186820 --alsologtostderr -v=3: (11.000070563s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.00s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820: exit status 7 (89.811736ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-186820 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0
E0929 14:35:52.915967 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/bridge-212797/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0929 14:36:31.788321 1127640 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21652-1125775/.minikube/profiles/no-preload-983174/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-186820 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.34.0: (51.931090968s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (52.31s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-641794 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (2.93s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-641794 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-641794 -n embed-certs-641794
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-641794 -n embed-certs-641794: exit status 2 (361.598661ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-641794 -n embed-certs-641794
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-641794 -n embed-certs-641794: exit status 2 (338.576033ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-641794 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-641794 -n embed-certs-641794
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-641794 -n embed-certs-641794
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.93s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-186820 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-186820 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820: exit status 2 (318.695159ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820: exit status 2 (352.933064ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-186820 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-186820 -n default-k8s-diff-port-186820
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.84s)

Test skip (26/341)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

TestDownloadOnly/v1.34.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

TestDownloadOnly/v1.34.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

TestDownloadOnlyKic (0.6s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-476121 --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-476121" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-476121
--- SKIP: TestDownloadOnlyKic (0.60s)

TestAddons/serial/GCPAuth/RealCredentials (0.01s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:759: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.01s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1033: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1792: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestNetworkPlugins/group/cilium (5.06s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-212797 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-212797

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-212797

>>> host: /etc/nsswitch.conf:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /etc/hosts:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /etc/resolv.conf:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-212797

>>> host: crictl pods:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: crictl containers:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> k8s: describe netcat deployment:
error: context "cilium-212797" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-212797" does not exist

>>> k8s: netcat logs:
error: context "cilium-212797" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-212797" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-212797" does not exist

>>> k8s: coredns logs:
error: context "cilium-212797" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-212797" does not exist

>>> k8s: api server logs:
error: context "cilium-212797" does not exist

>>> host: /etc/cni:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: ip a s:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: ip r s:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: iptables-save:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: iptables table nat:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-212797

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-212797

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-212797" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-212797" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-212797

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-212797

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-212797" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-212797" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-212797" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-212797" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-212797" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: kubelet daemon config:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> k8s: kubelet logs:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-212797

>>> host: docker daemon status:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: docker daemon config:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: docker system info:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: cri-docker daemon status:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: cri-docker daemon config:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: cri-dockerd version:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: containerd daemon status:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: containerd daemon config:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: containerd config dump:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: crio daemon status:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: crio daemon config:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: /etc/crio:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

>>> host: crio config:
* Profile "cilium-212797" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-212797"

----------------------- debugLogs end: cilium-212797 [took: 4.849272285s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-212797" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-212797
--- SKIP: TestNetworkPlugins/group/cilium (5.06s)

TestStartStop/group/disable-driver-mounts (0.17s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-627946" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-627946
--- SKIP: TestStartStop/group/disable-driver-mounts (0.17s)