Test Report: KVM_Linux_containerd 22230

c636a8658fdd5cfdd18416b9a30087c97060a836:2025-12-19:42856

Failed tests (30/437)

Order  Failed test  Duration (s)
99 TestFunctional/parallel/DashboardCmd 21.37
192 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd 37.69
387 TestISOImage/Binaries/crictl 0
388 TestISOImage/Binaries/curl 0
389 TestISOImage/Binaries/docker 0
390 TestISOImage/Binaries/git 0
391 TestISOImage/Binaries/iptables 0
392 TestISOImage/Binaries/podman 0
393 TestISOImage/Binaries/rsync 0
394 TestISOImage/Binaries/socat 0
395 TestISOImage/Binaries/wget 0
396 TestISOImage/Binaries/VBoxControl 0
397 TestISOImage/Binaries/VBoxService 0
480 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 542.62
481 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 542.62
484 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 542.71
485 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 542.66
486 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 542.71
487 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 542.75
488 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 542.77
489 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.4
497 TestISOImage/PersistentMounts//data 0
498 TestISOImage/PersistentMounts//var/lib/docker 0
499 TestISOImage/PersistentMounts//var/lib/cni 0
500 TestISOImage/PersistentMounts//var/lib/kubelet 0
501 TestISOImage/PersistentMounts//var/lib/minikube 0
502 TestISOImage/PersistentMounts//var/lib/toolbox 0
503 TestISOImage/PersistentMounts//var/lib/boot2docker 0
504 TestISOImage/VersionJSON 0
505 TestISOImage/eBPFSupport 0
TestFunctional/parallel/DashboardCmd (21.37s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-991175 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-991175 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-991175 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-991175 --alsologtostderr -v=1] stderr:
I1219 02:37:08.812689   15434 out.go:360] Setting OutFile to fd 1 ...
I1219 02:37:08.812816   15434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:08.812824   15434 out.go:374] Setting ErrFile to fd 2...
I1219 02:37:08.812830   15434 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:08.813061   15434 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:37:08.813277   15434 mustload.go:66] Loading cluster: functional-991175
I1219 02:37:08.813622   15434 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:08.815300   15434 host.go:66] Checking if "functional-991175" exists ...
I1219 02:37:08.815482   15434 api_server.go:166] Checking apiserver status ...
I1219 02:37:08.815515   15434 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:37:08.817535   15434 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:08.817855   15434 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:08.817879   15434 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:08.818032   15434 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:08.918396   15434 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5319/cgroup
W1219 02:37:08.930427   15434 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5319/cgroup: Process exited with status 1
stdout:

stderr:
I1219 02:37:08.930483   15434 ssh_runner.go:195] Run: ls
I1219 02:37:08.936517   15434 api_server.go:253] Checking apiserver healthz at https://192.168.39.176:8441/healthz ...
I1219 02:37:08.941184   15434 api_server.go:279] https://192.168.39.176:8441/healthz returned 200:
ok
W1219 02:37:08.941217   15434 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:37:08.941354   15434 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:08.941369   15434 addons.go:70] Setting dashboard=true in profile "functional-991175"
I1219 02:37:08.941380   15434 addons.go:239] Setting addon dashboard=true in "functional-991175"
I1219 02:37:08.941399   15434 host.go:66] Checking if "functional-991175" exists ...
I1219 02:37:08.942973   15434 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:37:08.942988   15434 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:37:08.945299   15434 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:08.945637   15434 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:08.945658   15434 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:08.945802   15434 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:09.055093   15434 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:37:09.059807   15434 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:37:09.063129   15434 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:37:10.336470   15434 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.273308304s)
I1219 02:37:10.336580   15434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:37:14.820687   15434 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (4.484058654s)
I1219 02:37:14.820785   15434 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:37:15.649164   15434 addons.go:500] Verifying addon dashboard=true in "functional-991175"
I1219 02:37:15.652790   15434 out.go:179] * Verifying dashboard addon...
I1219 02:37:15.655233   15434 kapi.go:59] client config for functional-991175: &rest.Config{Host:"https://192.168.39.176:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:37:15.655883   15434 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:37:15.655910   15434 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:37:15.655919   15434 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:37:15.655926   15434 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:37:15.655931   15434 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:37:15.656386   15434 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:37:15.674426   15434 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:37:15.674457   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:16.174526   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:16.659913   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:17.160607   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:17.670174   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:18.163739   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:18.660387   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:19.161537   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:19.660801   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:20.169384   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:20.659947   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:21.159769   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:21.660888   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:22.160051   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:22.660618   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:23.162383   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:23.659898   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:24.160156   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:24.659348   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:25.160675   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:25.660285   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:26.160718   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:26.662406   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:27.160782   15434 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:37:27.660411   15434 kapi.go:107] duration metric: took 12.004026678s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 02:37:27.661763   15434 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-991175 addons enable metrics-server

I1219 02:37:27.662742   15434 addons.go:202] Writing out "functional-991175" config to set dashboard=true...
W1219 02:37:27.662948   15434 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 02:37:27.663346   15434 kapi.go:59] client config for functional-991175: &rest.Config{Host:"https://192.168.39.176:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:37:27.665381   15434 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy  kubernetes-dashboard  89bba8fe-5a12-4399-93bb-62fba77c45b5 914 0 2025-12-19 02:37:14 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 02:37:14 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".
":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:31505,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.96.134.161,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.96.134.161],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 02:37:27.665529   15434 host.go:66] Checking if "functional-991175" exists ...
I1219 02:37:27.668628   15434 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:27.668975   15434 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:27.669030   15434 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:27.669544   15434 kapi.go:59] client config for functional-991175: &rest.Config{Host:"https://192.168.39.176:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:37:27.676529   15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.679837   15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.688325   15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.692003   15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.873079   15434 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:37:27.932375   15434 out.go:179] * Dashboard Token:
I1219 02:37:27.933631   15434 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6Ik1QSWp4QUJwaGU5bFJPMGNuVWNJZUZGZDVqckx1Y0htYWNzSk1OeHBZMkUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MTk4MjQ3LCJpYXQiOjE3NjYxMTE4NDcsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMTdmMzhiNGUtYjcwOC00M2UyLWEzYjgtMDY2ZGFmOWZmZGNhIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiMGY5ZDBhMWItMDBkMy00NzdhLTk3M2ItZjhkOWRmOTIzZWNkIn19LCJuYmYiOjE3NjYxMTE4NDcsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.0RMrKt2v6YHAb_ZVDM5h-JigTy5Hl8kYqGmoMD4EC4wyMFkAFxZyRA8Ho5k6fudA6ldti_aDZYjGqn02TDZyKb9cQdApIoEIJOr6RFUPWg9fJ0z_ptZZTXDSMEDCsX0iOpz9mQyL5DTg80yynOoXYS2o4RiBxPG1dF4AiNF7u8_vhiFgCu_gN4ANQqvSrN3HyIAbCujtlpAi47mn7JNLAJPaQIgoxCql4Q1fe8iY5cKwRr-xEhT_vGLfLb4cNFZWtdX1_4JVomZnrUHDnb_h2j8bm3V2E_U9Win9ubnoWo_3QBaQr-Hih1EsY6Swr4W48ISBVDhr1Gkz9YFoIIkMeA
I1219 02:37:27.934860   15434 out.go:203] https://192.168.39.176:31505
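Note: the daemon's stderr above shows the dashboard did come up and eventually printed a token and the NodePort URL, yet the test failed at functional_test.go:933 because the command's captured stdout stayed empty. As a rough illustration only, and not minikube's actual test code, a check of this kind typically scans the captured output for an http(s) URL and fails when none appears; a minimal Go sketch under that assumption:

// Hypothetical illustration only: this is NOT the code behind
// functional_test.go:933, just a sketch of the kind of check it reports on,
// assuming the test scans the dashboard command's captured stdout for a URL.
package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
)

// urlRe matches the first http(s) URL token on a line.
var urlRe = regexp.MustCompile(`https?://\S+`)

// firstURL returns the first URL found in the captured output, or an error
// matching the symptom seen in this run ("output didn't produce a URL").
func firstURL(output string) (string, error) {
	sc := bufio.NewScanner(strings.NewReader(output))
	for sc.Scan() {
		if u := urlRe.FindString(sc.Text()); u != "" {
			return u, nil
		}
	}
	return "", fmt.Errorf("output didn't produce a URL")
}

func main() {
	// Empty stdout, as captured for this failure: the check fails.
	if _, err := firstURL(""); err != nil {
		fmt.Println("FAIL:", err)
	}
	// A URL like the one the daemon printed at the end would satisfy it.
	if u, err := firstURL("https://192.168.39.176:31505\n"); err == nil {
		fmt.Println("URL:", u)
	}
}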
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-991175 -n functional-991175
helpers_test.go:253: <<< TestFunctional/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctional/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 logs -n 25
E1219 02:37:28.692065    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 logs -n 25: (1.458053123s)
helpers_test.go:261: TestFunctional/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ update-context │ functional-991175 update-context --alsologtostderr -v=2                                                                           │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ update-context │ functional-991175 update-context --alsologtostderr -v=2                                                                           │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ image          │ functional-991175 image ls --format short --alsologtostderr                                                                       │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ image          │ functional-991175 image ls --format yaml --alsologtostderr                                                                        │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh pgrep buildkitd                                                                                             │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ image          │ functional-991175 image build -t localhost/my-image:functional-991175 testdata/build --alsologtostderr                            │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh stat /mount-9p/created-by-test                                                                              │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh stat /mount-9p/created-by-pod                                                                               │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh sudo umount -f /mount-9p                                                                                    │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ mount          │ -p functional-991175 /tmp/TestFunctionalparallelMountCmdspecific-port1680034923/001:/mount-9p --alsologtostderr -v=1 --port 38769 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ ssh            │ functional-991175 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh -- ls -la /mount-9p                                                                                         │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh sudo umount -f /mount-9p                                                                                    │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ mount          │ -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount2 --alsologtostderr -v=1                 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ mount          │ -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount3 --alsologtostderr -v=1                 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ ssh            │ functional-991175 ssh findmnt -T /mount1                                                                                          │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ mount          │ -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount1 --alsologtostderr -v=1                 │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ ssh            │ functional-991175 ssh findmnt -T /mount1                                                                                          │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh findmnt -T /mount2                                                                                          │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ ssh            │ functional-991175 ssh findmnt -T /mount3                                                                                          │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ mount          │ -p functional-991175 --kill=true                                                                                                  │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │                     │
	│ image          │ functional-991175 image ls --format json --alsologtostderr                                                                        │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ image          │ functional-991175 image ls --format table --alsologtostderr                                                                       │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	│ image          │ functional-991175 image ls                                                                                                        │ functional-991175 │ jenkins │ v1.37.0 │ 19 Dec 25 02:37 UTC │ 19 Dec 25 02:37 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:37:08
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:37:08.708923   15417 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:37:08.709025   15417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:37:08.709041   15417 out.go:374] Setting ErrFile to fd 2...
	I1219 02:37:08.709047   15417 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:37:08.709268   15417 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:37:08.709676   15417 out.go:368] Setting JSON to false
	I1219 02:37:08.710481   15417 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1168,"bootTime":1766110661,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:37:08.710534   15417 start.go:143] virtualization: kvm guest
	I1219 02:37:08.712425   15417 out.go:179] * [functional-991175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:37:08.713666   15417 notify.go:221] Checking for updates...
	I1219 02:37:08.713691   15417 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:37:08.714942   15417 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:37:08.716323   15417 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:37:08.718327   15417 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:37:08.722180   15417 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:37:08.723394   15417 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:37:08.724843   15417 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:37:08.725298   15417 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:37:08.754231   15417 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:37:08.755256   15417 start.go:309] selected driver: kvm2
	I1219 02:37:08.755274   15417 start.go:928] validating driver "kvm2" against &{Name:functional-991175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-991175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:37:08.755408   15417 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:37:08.756878   15417 cni.go:84] Creating CNI manager for ""
	I1219 02:37:08.756968   15417 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 02:37:08.757049   15417 start.go:353] cluster config:
	{Name:functional-991175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:functional-991175 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountM
Size:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:37:08.758485   15417 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	118e50785c6ac       59f642f485d26       2 seconds ago        Running             kubernetes-dashboard-web               0                   412772cade7a5       kubernetes-dashboard-web-5c9f966b98-vfznd               kubernetes-dashboard
	bcf56a36a47d3       d9cbc9f4053ca       8 seconds ago        Running             kubernetes-dashboard-metrics-scraper   0                   4039f7831cfa1       kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv   kubernetes-dashboard
	95cc9db4f6b90       04da2b0513cd7       15 seconds ago       Running             myfrontend                             0                   b20095e6b429e       sp-pod                                                  default
	f9b5cfdb430c6       56cc512116c8f       17 seconds ago       Exited              mount-munger                           0                   e22952349e54d       busybox-mount                                           default
	4c716c69c6bc0       9056ab77afb8e       29 seconds ago       Running             echo-server                            0                   102d9d00f939d       hello-node-75c85bcc94-f6w8n                             default
	3eb8137aed9dd       9056ab77afb8e       30 seconds ago       Running             echo-server                            0                   3d99677466fcb       hello-node-connect-7d85dfc575-czl7n                     default
	d5c08603141b5       20d0be4ee4524       32 seconds ago       Running             mysql                                  0                   0c0b388498b01       mysql-6bcdcbc558-554d8                                  default
	58db843b3841b       6e38f40d628db       57 seconds ago       Running             storage-provisioner                    4                   6c4004e62eb0a       storage-provisioner                                     kube-system
	a5e0c8b8a3fab       36eef8e07bdd6       About a minute ago   Running             kube-proxy                             2                   d66adc23ee232       kube-proxy-wdgkq                                        kube-system
	66f1404c482dc       52546a367cc9e       About a minute ago   Running             coredns                                2                   3b91f9a6fcdf1       coredns-66bc5c9577-5qflf                                kube-system
	f648417def1c9       6e38f40d628db       About a minute ago   Exited              storage-provisioner                    3                   6c4004e62eb0a       storage-provisioner                                     kube-system
	3a40fa1b46b6b       aa27095f56193       About a minute ago   Running             kube-apiserver                         0                   477a78fb1310a       kube-apiserver-functional-991175                        kube-system
	af6b518575775       aec12dadf56dd       About a minute ago   Running             kube-scheduler                         2                   875914cb42f8c       kube-scheduler-functional-991175                        kube-system
	aeed4cc1daccc       5826b25d990d7       About a minute ago   Running             kube-controller-manager                3                   a5c62d5fd27ba       kube-controller-manager-functional-991175               kube-system
	da476738e5f1b       a3e246e9556e9       About a minute ago   Running             etcd                                   2                   9c3022414033a       etcd-functional-991175                                  kube-system
	dc8923e751097       5826b25d990d7       2 minutes ago        Exited              kube-controller-manager                2                   a5c62d5fd27ba       kube-controller-manager-functional-991175               kube-system
	348a5c2688f20       a3e246e9556e9       2 minutes ago        Exited              etcd                                   1                   9c3022414033a       etcd-functional-991175                                  kube-system
	429dbf7c0e75e       52546a367cc9e       2 minutes ago        Exited              coredns                                1                   3b91f9a6fcdf1       coredns-66bc5c9577-5qflf                                kube-system
	263987cd3ab41       36eef8e07bdd6       2 minutes ago        Exited              kube-proxy                             1                   d66adc23ee232       kube-proxy-wdgkq                                        kube-system
	bf5b0a6cb75fc       aec12dadf56dd       2 minutes ago        Exited              kube-scheduler                         1                   875914cb42f8c       kube-scheduler-functional-991175                        kube-system
	
	
	==> containerd <==
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.360960648Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod9ea23ba6-0bd8-4e5f-90c6-7037d545eb69/d5c08603141b5e06cee23eedf2cc9e3a085d5d16632a1854542035e4b94b4c1e/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.362143215Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod0238d7b6-85df-4964-a6e6-7fb14714d248/3eb8137aed9ddf5f42db3e19b8d9b731cb62941dcfaf8ae2946ddad9bc1adc96/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.363135736Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod2dd3a376-a910-4065-a877-d5dd5989104c/4c716c69c6bc05ad11ee26a1ab6be7046e638bd0f866dd3a1e701b2a9530df17/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.365475797Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podc1a46824-5cec-4320-8f94-c8c554b0272c/bcf56a36a47d3428bb03eea9a006006029d10c9879b7becb842c7bb0e1774014/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.368626657Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd0b934acd85dea9a21ab1e414d833e00/af6b5185757751a4adb5a14512ee3a542a0c9812feb002595b542a3da537532c/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.369451884Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podeb78920fa8288e3e92099be56b0387ab/3a40fa1b46b6bb4b9a1917f03833d4a5537a27249d495edc09375bf6d2e61fc6/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.370642636Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod8b50289536da016751fda33002b3c6dd/da476738e5f1b95b1f74582bdf07802870ffd176aa25a3c5c77c0c576c35f679/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.371506963Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podaa7a07822daf4df680cd252b6bdb1bb2/aeed4cc1daccc8314e531e5b310d6e30e12fbee865eb49338db7a0ccecf19759/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.372238776Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podebfcc1c4-e2f8-45ef-abde-a764cd68d374/66f1404c482dc20ddc28bc3ba9a6541a9fa56c30b598ee60e1eb96348aa624d3/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.373926021Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod09b44541-6422-4929-9467-c65fe5dd3f86/a5e0c8b8a3fabe401a1d0cfad5f633ae19db07a1ada2a523a07729ff6aab773e/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.374888223Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podf4d8f0f0-a770-4556-8fcb-88c02bcdb4a9/95cc9db4f6b90126964eac00353d31d77bd9350acdd63ddd0b504401299d8771/hugetlb.2MB.events\""
	Dec 19 02:37:23 functional-991175 containerd[4437]: time="2025-12-19T02:37:23.379655075Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod0b827772-7dd9-4175-86bc-0507e1b78055/58db843b3841bb930e38261156b1c8725df9bf507fd7a32a3b854031be81ea26/hugetlb.2MB.events\""
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.580536599Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-web:1.7.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.582378355Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard-web:1.7.0: active requests=0, bytes read=62507990"
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.583938206Z" level=info msg="ImageCreate event name:\"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.587919694Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.588871036Z" level=info msg="Pulled image \"docker.io/kubernetesui/dashboard-web:1.7.0\" with image id \"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\", repo tag \"docker.io/kubernetesui/dashboard-web:1.7.0\", repo digest \"docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d\", size \"62497108\" in 6.697931408s"
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.588916963Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-web:1.7.0\" returns image reference \"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\""
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.591282972Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-api:1.14.0\""
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.597752944Z" level=info msg="CreateContainer within sandbox \"412772cade7a5d04c80c5d4988ba69d199d63adb0403846d52abab5d4c3f572b\" for container name:\"kubernetes-dashboard-web\""
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.611080862Z" level=info msg="Container 118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36: CDI devices from CRI Config.CDIDevices: []"
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.624870931Z" level=info msg="CreateContainer within sandbox \"412772cade7a5d04c80c5d4988ba69d199d63adb0403846d52abab5d4c3f572b\" for name:\"kubernetes-dashboard-web\" returns container id \"118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36\""
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.625942121Z" level=info msg="StartContainer for \"118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36\""
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.627970258Z" level=info msg="connecting to shim 118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36" address="unix:///run/containerd/s/8d7e6d3b2b29e2bf77a37a37e876f04cbd444e7deb931946284a8d6cfdc4a302" protocol=ttrpc version=3
	Dec 19 02:37:26 functional-991175 containerd[4437]: time="2025-12-19T02:37:26.768281782Z" level=info msg="StartContainer for \"118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36\" returns successfully"
	
	
	==> coredns [429dbf7c0e75ef36df5d65eccdbdbf117c37b5047a36bfd113fbf82e49bd04ce] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53347 - 50564 "HINFO IN 8070156027199287086.8041306130364622304. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017042702s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?resourceVersion=466": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [66f1404c482dc20ddc28bc3ba9a6541a9fa56c30b598ee60e1eb96348aa624d3] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:41979 - 58544 "HINFO IN 5715910185780935871.5693441124122664811. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016268999s
	
	
	==> describe nodes <==
	Name:               functional-991175
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-991175
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-991175
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_34_38_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:34:34 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-991175
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:37:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:37:17 +0000   Fri, 19 Dec 2025 02:34:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:37:17 +0000   Fri, 19 Dec 2025 02:34:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:37:17 +0000   Fri, 19 Dec 2025 02:34:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:37:17 +0000   Fri, 19 Dec 2025 02:34:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.176
	  Hostname:    functional-991175
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001784Ki
	  pods:               110
	System Info:
	  Machine ID:                 1723782ad1a04ba5acbc6b8bdb9df320
	  System UUID:                1723782a-d1a0-4ba5-acbc-6b8bdb9df320
	  Boot ID:                    aeb9bd68-4db2-45f9-95ee-16fc41838eb9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-f6w8n                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         42s
	  default                     hello-node-connect-7d85dfc575-czl7n                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  default                     mysql-6bcdcbc558-554d8                                   600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    50s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17s
	  kube-system                 coredns-66bc5c9577-5qflf                                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     2m47s
	  kube-system                 etcd-functional-991175                                   100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m52s
	  kube-system                 kube-apiserver-functional-991175                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-functional-991175                200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 kube-proxy-wdgkq                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  kube-system                 kube-scheduler-functional-991175                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m52s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m45s
	  kubernetes-dashboard        kubernetes-dashboard-api-5f84cf677c-t95d8                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    15s
	  kubernetes-dashboard        kubernetes-dashboard-auth-75547cbd96-t758q               100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    15s
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-5ldxp                0 (0%)        0 (0%)      0 (0%)           0 (0%)         15s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv    100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    15s
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-vfznd                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    15s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1750m (87%)   1700m (85%)
	  memory             1482Mi (37%)  2470Mi (63%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m45s                  kube-proxy       
	  Normal  Starting                 71s                    kube-proxy       
	  Normal  Starting                 2m20s                  kube-proxy       
	  Normal  Starting                 2m59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s (x8 over 2m58s)  kubelet          Node functional-991175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m58s (x8 over 2m58s)  kubelet          Node functional-991175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m58s (x7 over 2m58s)  kubelet          Node functional-991175 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m52s                  kubelet          Node functional-991175 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m52s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m52s                  kubelet          Node functional-991175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m52s                  kubelet          Node functional-991175 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m52s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m51s                  kubelet          Node functional-991175 status is now: NodeReady
	  Normal  RegisteredNode           2m48s                  node-controller  Node functional-991175 event: Registered Node functional-991175 in Controller
	  Normal  NodeHasSufficientMemory  2m4s (x8 over 2m4s)    kubelet          Node functional-991175 status is now: NodeHasSufficientMemory
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    2m4s (x8 over 2m4s)    kubelet          Node functional-991175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m4s (x7 over 2m4s)    kubelet          Node functional-991175 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m4s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                   node-controller  Node functional-991175 event: Registered Node functional-991175 in Controller
	  Normal  Starting                 77s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  76s (x8 over 76s)      kubelet          Node functional-991175 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    76s (x8 over 76s)      kubelet          Node functional-991175 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     76s (x7 over 76s)      kubelet          Node functional-991175 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  76s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           70s                    node-controller  Node functional-991175 event: Registered Node functional-991175 in Controller
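
The "Allocated resources" totals above can be cross-checked against the per-pod Requests/Limits columns of the "Non-terminated Pods" table. A minimal Go sketch of that arithmetic follows; the figures are transcribed from the table above (pods requesting nothing are omitted), and the program itself is purely illustrative, not part of the test suite:

    package main

    import "fmt"

    // Requests/limits transcribed from the "Non-terminated Pods" table
    // (CPU in millicores, memory in Mi).
    type pod struct {
        name           string
        cpuReq, cpuLim int
        memReq, memLim int
    }

    func main() {
        pods := []pod{
            {"mysql-6bcdcbc558-554d8", 600, 700, 512, 700},
            {"coredns-66bc5c9577-5qflf", 100, 0, 70, 170},
            {"etcd-functional-991175", 100, 0, 100, 0},
            {"kube-apiserver-functional-991175", 250, 0, 0, 0},
            {"kube-controller-manager-functional-991175", 200, 0, 0, 0},
            {"kube-scheduler-functional-991175", 100, 0, 0, 0},
            {"kubernetes-dashboard-api-5f84cf677c-t95d8", 100, 250, 200, 400},
            {"kubernetes-dashboard-auth-75547cbd96-t758q", 100, 250, 200, 400},
            {"kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv", 100, 250, 200, 400},
            {"kubernetes-dashboard-web-5c9f966b98-vfznd", 100, 250, 200, 400},
            // hello-node, hello-node-connect, sp-pod, kube-proxy, storage-provisioner
            // and the kong proxy pod all request/limit 0, so they are omitted.
        }
        var cpuReq, cpuLim, memReq, memLim int
        for _, p := range pods {
            cpuReq += p.cpuReq
            cpuLim += p.cpuLim
            memReq += p.memReq
            memLim += p.memLim
        }
        const allocatableCPU = 2000 // 2 allocatable CPUs, in millicores
        fmt.Printf("cpu:    %dm (%d%%) / %dm (%d%%)\n",
            cpuReq, cpuReq*100/allocatableCPU, cpuLim, cpuLim*100/allocatableCPU)
        fmt.Printf("memory: %dMi / %dMi\n", memReq, memLim)
        // Prints cpu 1750m (87%) / 1700m (85%) and memory 1482Mi / 2470Mi,
        // matching the "Allocated resources" block above.
    }

With only two allocatable CPUs, requests on this node already sit at 87% at the time of the dump.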
	
	
	==> dmesg <==
	[  +0.090988] kauditd_printk_skb: 18 callbacks suppressed
	[  +0.123275] kauditd_printk_skb: 171 callbacks suppressed
	[  +5.652153] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.955857] kauditd_printk_skb: 255 callbacks suppressed
	[  +0.123245] kauditd_printk_skb: 44 callbacks suppressed
	[Dec19 02:35] kauditd_printk_skb: 116 callbacks suppressed
	[  +6.585224] kauditd_printk_skb: 60 callbacks suppressed
	[  +6.713571] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.649255] kauditd_printk_skb: 14 callbacks suppressed
	[  +1.860574] kauditd_printk_skb: 20 callbacks suppressed
	[ +12.108279] kauditd_printk_skb: 8 callbacks suppressed
	[  +0.112528] kauditd_printk_skb: 12 callbacks suppressed
	[Dec19 02:36] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.205783] kauditd_printk_skb: 47 callbacks suppressed
	[  +4.195739] kauditd_printk_skb: 75 callbacks suppressed
	[ +12.544167] kauditd_printk_skb: 43 callbacks suppressed
	[  +1.736160] kauditd_printk_skb: 53 callbacks suppressed
	[  +1.814021] kauditd_printk_skb: 91 callbacks suppressed
	[  +6.484366] kauditd_printk_skb: 38 callbacks suppressed
	[  +4.784440] kauditd_printk_skb: 74 callbacks suppressed
	[Dec19 02:37] kauditd_printk_skb: 53 callbacks suppressed
	[  +4.477591] kauditd_printk_skb: 41 callbacks suppressed
	[  +0.682477] crun[8106]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
	[  +0.270027] kauditd_printk_skb: 180 callbacks suppressed
	[ +10.433789] kauditd_printk_skb: 135 callbacks suppressed
	
	
	==> etcd [348a5c2688f204ad24f7cf5f82189d287519be45d68cfd73cc5ef109ce2d773c] <==
	{"level":"warn","ts":"2025-12-19T02:35:27.479418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:27.487148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60598","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:27.499498Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:27.504216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:27.513617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:27.522432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60664","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:35:27.570621Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60692","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:36:06.601097Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:36:06.601223Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-991175","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"]}
	{"level":"error","ts":"2025-12-19T02:36:06.601331Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:36:06.603143Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:36:06.603231Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:36:06.603258Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f70d523d4475ce3b","current-leader-member-id":"f70d523d4475ce3b"}
	{"level":"warn","ts":"2025-12-19T02:36:06.603276Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.176:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:36:06.603337Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:36:06.603356Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:36:06.603342Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.176:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:36:06.603394Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.176:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:36:06.603398Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"error","ts":"2025-12-19T02:36:06.603382Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:36:06.603412Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-19T02:36:06.606564Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"error","ts":"2025-12-19T02:36:06.606619Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.176:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:36:06.606638Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.176:2380"}
	{"level":"info","ts":"2025-12-19T02:36:06.606644Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-991175","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.176:2380"],"advertise-client-urls":["https://192.168.39.176:2379"]}
	
	
	==> etcd [da476738e5f1b95b1f74582bdf07802870ffd176aa25a3c5c77c0c576c35f679] <==
	{"level":"warn","ts":"2025-12-19T02:36:53.229453Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"196.049564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:36:53.229494Z","caller":"traceutil/trace.go:172","msg":"trace[160057841] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:778; }","duration":"196.096933ms","start":"2025-12-19T02:36:53.033391Z","end":"2025-12-19T02:36:53.229488Z","steps":["trace[160057841] 'agreement among raft nodes before linearized reading'  (duration: 196.034311ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:36:53.229624Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"192.725968ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:36:53.229645Z","caller":"traceutil/trace.go:172","msg":"trace[1513768200] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:778; }","duration":"192.748937ms","start":"2025-12-19T02:36:53.036891Z","end":"2025-12-19T02:36:53.229640Z","steps":["trace[1513768200] 'agreement among raft nodes before linearized reading'  (duration: 192.705275ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:36:55.498756Z","caller":"traceutil/trace.go:172","msg":"trace[1291379608] linearizableReadLoop","detail":"{readStateIndex:854; appliedIndex:854; }","duration":"243.800216ms","start":"2025-12-19T02:36:55.254940Z","end":"2025-12-19T02:36:55.498740Z","steps":["trace[1291379608] 'read index received'  (duration: 243.79571ms)","trace[1291379608] 'applied index is now lower than readState.Index'  (duration: 3.81µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:36:55.498883Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"243.928985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:36:55.498902Z","caller":"traceutil/trace.go:172","msg":"trace[1703508812] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:778; }","duration":"243.981317ms","start":"2025-12-19T02:36:55.254916Z","end":"2025-12-19T02:36:55.498897Z","steps":["trace[1703508812] 'agreement among raft nodes before linearized reading'  (duration: 243.897928ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:36:55.499386Z","caller":"traceutil/trace.go:172","msg":"trace[1409033602] transaction","detail":"{read_only:false; response_revision:779; number_of_response:1; }","duration":"261.613207ms","start":"2025-12-19T02:36:55.237764Z","end":"2025-12-19T02:36:55.499377Z","steps":["trace[1409033602] 'process raft request'  (duration: 261.437538ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:36:55.669734Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"141.399397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:36:55.669787Z","caller":"traceutil/trace.go:172","msg":"trace[390670026] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:779; }","duration":"141.459817ms","start":"2025-12-19T02:36:55.528316Z","end":"2025-12-19T02:36:55.669775Z","steps":["trace[390670026] 'range keys from in-memory index tree'  (duration: 141.348936ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:37:11.013626Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"121.99862ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/mysql-6bcdcbc558-554d8\" limit:1 ","response":"range_response_count:1 size:3594"}
	{"level":"info","ts":"2025-12-19T02:37:11.013973Z","caller":"traceutil/trace.go:172","msg":"trace[251922854] range","detail":"{range_begin:/registry/pods/default/mysql-6bcdcbc558-554d8; range_end:; response_count:1; response_revision:828; }","duration":"122.418947ms","start":"2025-12-19T02:37:10.891538Z","end":"2025-12-19T02:37:11.013957Z","steps":["trace[251922854] 'range keys from in-memory index tree'  (duration: 121.211806ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:37:19.832331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:19.880237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:19.912067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.007184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57840","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.027873Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.062407Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.087574Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.099071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.117458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57926","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.132927Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.178825Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57968","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T02:37:20.197970Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57996","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T02:37:26.033317Z","caller":"traceutil/trace.go:172","msg":"trace[486613573] transaction","detail":"{read_only:false; response_revision:1014; number_of_response:1; }","duration":"105.215955ms","start":"2025-12-19T02:37:25.928086Z","end":"2025-12-19T02:37:26.033302Z","steps":["trace[486613573] 'process raft request'  (duration: 105.116822ms)"],"step_count":1}
	
	
	==> kernel <==
	 02:37:29 up 3 min,  0 users,  load average: 2.05, 0.94, 0.37
	Linux functional-991175 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [3a40fa1b46b6bb4b9a1917f03833d4a5537a27249d495edc09375bf6d2e61fc6] <==
	I1219 02:37:11.789955       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:37:11.821031       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:37:11.849847       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:37:11.885430       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:37:11.910926       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:37:11.935153       1 handler.go:285] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:37:14.266374       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 02:37:14.502190       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.105.193.171"}
	I1219 02:37:14.521545       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.106.178.141"}
	I1219 02:37:14.598414       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.96.134.161"}
	I1219 02:37:14.601418       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.66.74"}
	I1219 02:37:14.635788       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.102.96.133"}
	W1219 02:37:19.823593       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:19.879732       1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:19.910608       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:19.986958       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:20.019109       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:20.062253       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:20.084280       1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:37:20.097921       1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:20.117477       1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:20.131662       1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:37:20.175577       1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:37:20.197698       1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	E1219 02:37:20.718895       1 conn.go:339] Error on socket receive: read tcp 192.168.39.176:8441->192.168.39.1:46740: use of closed network connection
	
	
	==> kube-controller-manager [aeed4cc1daccc8314e531e5b310d6e30e12fbee865eb49338db7a0ccecf19759] <==
	I1219 02:36:19.777386       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 02:36:19.779585       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 02:36:19.779669       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:36:19.783938       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 02:36:19.791916       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 02:36:19.797826       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 02:36:19.802878       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:36:19.809055       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 02:36:19.811514       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:36:19.821913       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 02:36:19.825956       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 02:36:19.830229       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 02:37:19.801066       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongingresses.configuration.konghq.com"
	I1219 02:37:19.802233       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="tcpingresses.configuration.konghq.com"
	I1219 02:37:19.802509       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumers.configuration.konghq.com"
	I1219 02:37:19.802732       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="udpingresses.configuration.konghq.com"
	I1219 02:37:19.802770       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 02:37:19.802899       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongplugins.configuration.konghq.com"
	I1219 02:37:19.803059       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongcustomentities.configuration.konghq.com"
	I1219 02:37:19.803281       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 02:37:19.803410       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="kongconsumergroups.configuration.konghq.com"
	I1219 02:37:19.803822       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1219 02:37:19.871211       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1219 02:37:21.206139       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 02:37:21.273325       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-controller-manager [dc8923e7510979fd92dafeba69038936e5ec5fedbd8fb9747727a17402df7ab1] <==
	I1219 02:35:31.611085       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 02:35:31.613839       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1219 02:35:31.615851       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 02:35:31.616728       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1219 02:35:31.616773       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 02:35:31.619355       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 02:35:31.619895       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1219 02:35:31.621911       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 02:35:31.623126       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 02:35:31.624677       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1219 02:35:31.624780       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1219 02:35:31.628085       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1219 02:35:31.628342       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 02:35:31.628584       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1219 02:35:31.631100       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1219 02:35:31.632252       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1219 02:35:31.637767       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 02:35:31.638112       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 02:35:31.703589       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:35:31.703626       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 02:35:31.703632       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 02:35:31.716958       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 02:35:33.220275       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	E1219 02:36:01.610613       1 resource_quota_controller.go:446] "Unhandled Error" err="failed to discover resources: Get \"https://192.168.39.176:8441/api\": dial tcp 192.168.39.176:8441: connect: connection refused" logger="UnhandledError"
	I1219 02:36:01.718371       1 garbagecollector.go:789] "failed to discover preferred resources" logger="garbage-collector-controller" error="Get \"https://192.168.39.176:8441/api\": dial tcp 192.168.39.176:8441: connect: connection refused"
	
	
	==> kube-proxy [263987cd3ab41d50eaeffbc947f6d9b9a461a3041001bf0430df62ae1cca1aec] <==
	I1219 02:35:08.159392       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:35:08.261027       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:35:08.261087       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.176"]
	E1219 02:35:08.261353       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:35:08.351709       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:35:08.351957       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:35:08.352325       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:35:08.367232       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:35:08.367783       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:35:08.367798       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:35:08.369122       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:35:08.369306       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:35:08.370171       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:35:08.370277       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:35:08.371030       1 config.go:309] "Starting node config controller"
	I1219 02:35:08.371121       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:35:08.371177       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:35:08.371502       1 config.go:200] "Starting service config controller"
	I1219 02:35:08.371593       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:35:08.469616       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:35:08.470601       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:35:08.471813       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-proxy [a5e0c8b8a3fabe401a1d0cfad5f633ae19db07a1ada2a523a07729ff6aab773e] <==
	I1219 02:36:17.715142       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 02:36:17.815413       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 02:36:17.815605       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.176"]
	E1219 02:36:17.815755       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:36:17.851627       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:36:17.851694       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:36:17.851715       1 server_linux.go:132] "Using iptables Proxier"
	I1219 02:36:17.861041       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:36:17.861382       1 server.go:527] "Version info" version="v1.34.3"
	I1219 02:36:17.861571       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:36:17.866200       1 config.go:200] "Starting service config controller"
	I1219 02:36:17.866212       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:36:17.866229       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:36:17.866233       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:36:17.866242       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:36:17.866245       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:36:17.866871       1 config.go:309] "Starting node config controller"
	I1219 02:36:17.866887       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:36:17.866894       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:36:17.967363       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:36:17.967390       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:36:17.967424       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [af6b5185757751a4adb5a14512ee3a542a0c9812feb002595b542a3da537532c] <==
	I1219 02:36:15.556827       1 serving.go:386] Generated self-signed cert in-memory
	I1219 02:36:16.395890       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 02:36:16.396065       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:36:16.401937       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 02:36:16.402215       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1219 02:36:16.402184       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:36:16.402472       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:36:16.403097       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:36:16.403383       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:36:16.402196       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:36:16.410123       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:36:16.504637       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1219 02:36:16.505172       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:36:16.512856       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kube-scheduler [bf5b0a6cb75fcd042dcd1db080ba4304d922c1440b7a984bb5f23505f353aea9] <==
	E1219 02:35:28.268687       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 02:35:28.269148       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 02:35:28.270188       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 02:35:28.270356       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 02:35:28.270458       1 reflector.go:205] "Failed to watch" err="poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot watch resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 02:35:28.270626       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 02:35:28.270799       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 02:35:28.270918       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 02:35:28.271088       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 02:35:28.271400       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 02:35:28.271519       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 02:35:28.271631       1 reflector.go:205] "Failed to watch" err="storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 02:35:28.273042       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 02:35:28.273275       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 02:35:28.273464       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 02:35:28.273672       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 02:35:28.273954       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1219 02:36:11.789586       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1219 02:36:11.789881       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:36:11.789901       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:36:11.790015       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:36:11.789594       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:36:11.790417       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:36:11.790439       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:36:11.790643       1 run.go:72] "command failed" err="finished without leader elect"
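
Unlike etcd, the kube-apiserver, kube-controller-manager and kube-scheduler blocks above log in klog's text format (severity letter, MMDD date, wall-clock time, PID, source file:line, then the message). The Go sketch below pulls those fields apart with a regular expression derived only from the lines shown in this report, not from klog's canonical grammar, so treat the pattern as an assumption:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // klogLine matches lines such as:
    //   E1219 02:35:28.268687       1 reflector.go:205] "Failed to watch" ...
    // Groups: severity, date (MMDD), time, pid, file:line, message.
    var klogLine = regexp.MustCompile(`([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+)\s+(\S+)\] (.*)$`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            m := klogLine.FindStringSubmatch(sc.Text())
            if m == nil {
                continue // not a klog-formatted line
            }
            sev, date, clock, src, msg := m[1], m[2], m[3], m[5], m[6]
            if sev == "E" || sev == "W" { // keep only warnings and errors
                fmt.Printf("%s %s %s %-28s %s\n", sev, date, clock, src, msg)
            }
        }
    }

Because the pattern is not anchored to the start of the line, it also finds the klog payload inside the journald-prefixed kubelet lines further down.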
	
	
	==> kubelet <==
	Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.862726    5156 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vf2g6\" (UniqueName: \"kubernetes.io/projected/2342ec87-98cc-47ed-9a07-a529f4e36993-kube-api-access-vf2g6\") pod \"2342ec87-98cc-47ed-9a07-a529f4e36993\" (UID: \"2342ec87-98cc-47ed-9a07-a529f4e36993\") "
	Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.862783    5156 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/2342ec87-98cc-47ed-9a07-a529f4e36993-test-volume\") pod \"2342ec87-98cc-47ed-9a07-a529f4e36993\" (UID: \"2342ec87-98cc-47ed-9a07-a529f4e36993\") "
	Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.862863    5156 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2342ec87-98cc-47ed-9a07-a529f4e36993-test-volume" (OuterVolumeSpecName: "test-volume") pod "2342ec87-98cc-47ed-9a07-a529f4e36993" (UID: "2342ec87-98cc-47ed-9a07-a529f4e36993"). InnerVolumeSpecName "test-volume". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.865285    5156 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2342ec87-98cc-47ed-9a07-a529f4e36993-kube-api-access-vf2g6" (OuterVolumeSpecName: "kube-api-access-vf2g6") pod "2342ec87-98cc-47ed-9a07-a529f4e36993" (UID: "2342ec87-98cc-47ed-9a07-a529f4e36993"). InnerVolumeSpecName "kube-api-access-vf2g6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.963160    5156 reconciler_common.go:299] "Volume detached for volume \"test-volume\" (UniqueName: \"kubernetes.io/host-path/2342ec87-98cc-47ed-9a07-a529f4e36993-test-volume\") on node \"functional-991175\" DevicePath \"\""
	Dec 19 02:37:13 functional-991175 kubelet[5156]: I1219 02:37:13.963191    5156 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vf2g6\" (UniqueName: \"kubernetes.io/projected/2342ec87-98cc-47ed-9a07-a529f4e36993-kube-api-access-vf2g6\") on node \"functional-991175\" DevicePath \"\""
	Dec 19 02:37:14 functional-991175 kubelet[5156]: I1219 02:37:14.479842    5156 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e22952349e54dd6c4f141857a2c1d2c754e5cda38e99fe2fb137dd96a9d3da9d"
	Dec 19 02:37:14 functional-991175 kubelet[5156]: I1219 02:37:14.852866    5156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/sp-pod" podStartSLOduration=2.852846891 podStartE2EDuration="2.852846891s" podCreationTimestamp="2025-12-19 02:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-19 02:37:14.56311055 +0000 UTC m=+61.690211950" watchObservedRunningTime="2025-12-19 02:37:14.852846891 +0000 UTC m=+61.979948290"
	Dec 19 02:37:14 functional-991175 kubelet[5156]: I1219 02:37:14.975894    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/c1a46824-5cec-4320-8f94-c8c554b0272c-tmp-volume\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv\" (UID: \"c1a46824-5cec-4320-8f94-c8c554b0272c\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv"
	Dec 19 02:37:14 functional-991175 kubelet[5156]: I1219 02:37:14.975943    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndhqb\" (UniqueName: \"kubernetes.io/projected/c1a46824-5cec-4320-8f94-c8c554b0272c-kube-api-access-ndhqb\") pod \"kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv\" (UID: \"c1a46824-5cec-4320-8f94-c8c554b0272c\") " pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: E1219 02:37:15.005527    5156 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kong-dbless-config\" is forbidden: User \"system:node:functional-991175\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kubernetes-dashboard\": no relationship found between node 'functional-991175' and this object" logger="UnhandledError" reflector="object-\"kubernetes-dashboard\"/\"kong-dbless-config\"" type="*v1.ConfigMap"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.081810    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-prefix-dir\" (UniqueName: \"kubernetes.io/empty-dir/ba8205ec-962f-4440-ab21-0ede40482a03-kubernetes-dashboard-kong-prefix-dir\") pod \"kubernetes-dashboard-kong-9849c64bd-5ldxp\" (UID: \"ba8205ec-962f-4440-ab21-0ede40482a03\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-5ldxp"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.081856    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28rt8\" (UniqueName: \"kubernetes.io/projected/f65681eb-1bb2-4962-8d33-6bea103f525d-kube-api-access-28rt8\") pod \"kubernetes-dashboard-web-5c9f966b98-vfznd\" (UID: \"f65681eb-1bb2-4962-8d33-6bea103f525d\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-vfznd"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082049    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f65681eb-1bb2-4962-8d33-6bea103f525d-tmp-volume\") pod \"kubernetes-dashboard-web-5c9f966b98-vfznd\" (UID: \"f65681eb-1bb2-4962-8d33-6bea103f525d\") " pod="kubernetes-dashboard/kubernetes-dashboard-web-5c9f966b98-vfznd"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082076    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kong-custom-dbless-config-volume\" (UniqueName: \"kubernetes.io/configmap/ba8205ec-962f-4440-ab21-0ede40482a03-kong-custom-dbless-config-volume\") pod \"kubernetes-dashboard-kong-9849c64bd-5ldxp\" (UID: \"ba8205ec-962f-4440-ab21-0ede40482a03\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-5ldxp"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082112    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubernetes-dashboard-kong-tmp\" (UniqueName: \"kubernetes.io/empty-dir/ba8205ec-962f-4440-ab21-0ede40482a03-kubernetes-dashboard-kong-tmp\") pod \"kubernetes-dashboard-kong-9849c64bd-5ldxp\" (UID: \"ba8205ec-962f-4440-ab21-0ede40482a03\") " pod="kubernetes-dashboard/kubernetes-dashboard-kong-9849c64bd-5ldxp"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082127    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/91d01929-6493-44be-bf0c-91123bab0b32-tmp-volume\") pod \"kubernetes-dashboard-auth-75547cbd96-t758q\" (UID: \"91d01929-6493-44be-bf0c-91123bab0b32\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-75547cbd96-t758q"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.082147    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gswrm\" (UniqueName: \"kubernetes.io/projected/91d01929-6493-44be-bf0c-91123bab0b32-kube-api-access-gswrm\") pod \"kubernetes-dashboard-auth-75547cbd96-t758q\" (UID: \"91d01929-6493-44be-bf0c-91123bab0b32\") " pod="kubernetes-dashboard/kubernetes-dashboard-auth-75547cbd96-t758q"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.185212    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2mlj\" (UniqueName: \"kubernetes.io/projected/d35933f0-20c6-4a7d-a25b-9a4e2fe54c23-kube-api-access-x2mlj\") pod \"kubernetes-dashboard-api-5f84cf677c-t95d8\" (UID: \"d35933f0-20c6-4a7d-a25b-9a4e2fe54c23\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-5f84cf677c-t95d8"
	Dec 19 02:37:15 functional-991175 kubelet[5156]: I1219 02:37:15.185298    5156 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/d35933f0-20c6-4a7d-a25b-9a4e2fe54c23-tmp-volume\") pod \"kubernetes-dashboard-api-5f84cf677c-t95d8\" (UID: \"d35933f0-20c6-4a7d-a25b-9a4e2fe54c23\") " pod="kubernetes-dashboard/kubernetes-dashboard-api-5f84cf677c-t95d8"
	Dec 19 02:37:19 functional-991175 kubelet[5156]: I1219 02:37:19.890914    5156 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001784Ki","pods":"110"}
	Dec 19 02:37:19 functional-991175 kubelet[5156]: I1219 02:37:19.891056    5156 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001784Ki","pods":"110"}
	Dec 19 02:37:26 functional-991175 kubelet[5156]: I1219 02:37:26.590851    5156 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001784Ki","pods":"110"}
	Dec 19 02:37:26 functional-991175 kubelet[5156]: I1219 02:37:26.590933    5156 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001784Ki","pods":"110"}
	Dec 19 02:37:27 functional-991175 kubelet[5156]: I1219 02:37:27.597466    5156 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-7685fd8b77-v9kvv" podStartSLOduration=9.765023436 podStartE2EDuration="13.597448488s" podCreationTimestamp="2025-12-19 02:37:14 +0000 UTC" firstStartedPulling="2025-12-19 02:37:16.058269483 +0000 UTC m=+63.185370863" lastFinishedPulling="2025-12-19 02:37:19.890694518 +0000 UTC m=+67.017795915" observedRunningTime="2025-12-19 02:37:20.565630222 +0000 UTC m=+67.692731622" watchObservedRunningTime="2025-12-19 02:37:27.597448488 +0000 UTC m=+74.724549889"
	
	
	==> kubernetes-dashboard [118e50785c6acd1787714e535a959b70c05eacbbab2d1e5e4e63ff209eb1ab36] <==
	I1219 02:37:26.846447       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 02:37:26.846547       1 init.go:48] Using in-cluster config
	I1219 02:37:26.847078       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [bcf56a36a47d3428bb03eea9a006006029d10c9879b7becb842c7bb0e1774014] <==
	I1219 02:37:20.188892       1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
	W1219 02:37:20.189387       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1219 02:37:20.190280       1 main.go:51] Kubernetes host: https://10.96.0.1:443
	I1219 02:37:20.190308       1 main.go:52] Namespace(s): []
	
	
	==> storage-provisioner [58db843b3841bb930e38261156b1c8725df9bf507fd7a32a3b854031be81ea26] <==
	W1219 02:37:03.584728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:05.589856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:05.601703       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:07.607918       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:07.626447       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:09.639867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:09.655444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:11.695164       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:11.717284       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:13.738885       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:13.754059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:15.764402       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:15.778279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:17.789702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:17.798346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:19.805724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:19.840142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:21.844182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:21.855458       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:23.858632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:23.921373       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:25.925485       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:26.036194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:28.041764       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:37:28.047435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f648417def1c9eeda96f53ec693199daa03bf12ee4c0496af5cca782a2a12d59] <==
	I1219 02:36:17.567067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 02:36:17.576771       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-991175 -n functional-991175
helpers_test.go:270: (dbg) Run:  kubectl --context functional-991175 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount kubernetes-dashboard-api-5f84cf677c-t95d8 kubernetes-dashboard-auth-75547cbd96-t758q kubernetes-dashboard-kong-9849c64bd-5ldxp
helpers_test.go:283: ======> post-mortem[TestFunctional/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-991175 describe pod busybox-mount kubernetes-dashboard-api-5f84cf677c-t95d8 kubernetes-dashboard-auth-75547cbd96-t758q kubernetes-dashboard-kong-9849c64bd-5ldxp
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context functional-991175 describe pod busybox-mount kubernetes-dashboard-api-5f84cf677c-t95d8 kubernetes-dashboard-auth-75547cbd96-t758q kubernetes-dashboard-kong-9849c64bd-5ldxp: exit status 1 (78.085752ms)

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-991175/192.168.39.176
	Start Time:       Fri, 19 Dec 2025 02:37:05 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.9
	IPs:
	  IP:  10.244.0.9
	Containers:
	  mount-munger:
	    Container ID:  containerd://f9b5cfdb430c61d58f430163b5939c57c79adc438cf17c0cb8b9375cac63ce46
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:37:11 +0000
	      Finished:     Fri, 19 Dec 2025 02:37:11 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vf2g6 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-vf2g6:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  24s   default-scheduler  Successfully assigned default/busybox-mount to functional-991175
	  Normal  Pulling    24s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     19s   kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 4.768s (4.768s including waiting). Image size: 2395207 bytes.
	  Normal  Created    19s   kubelet            spec.containers{mount-munger}: Created container: mount-munger
	  Normal  Started    19s   kubelet            spec.containers{mount-munger}: Started container mount-munger

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "kubernetes-dashboard-api-5f84cf677c-t95d8" not found
	Error from server (NotFound): pods "kubernetes-dashboard-auth-75547cbd96-t758q" not found
	Error from server (NotFound): pods "kubernetes-dashboard-kong-9849c64bd-5ldxp" not found

** /stderr **
helpers_test.go:288: kubectl --context functional-991175 describe pod busybox-mount kubernetes-dashboard-api-5f84cf677c-t95d8 kubernetes-dashboard-auth-75547cbd96-t758q kubernetes-dashboard-kong-9849c64bd-5ldxp: exit status 1
--- FAIL: TestFunctional/parallel/DashboardCmd (21.37s)
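The failure above comes from the URL check in the test helper: "minikube dashboard --url" is started as a daemon and the test then waits for a URL to show up on that command's stdout; nothing URL-like appeared before the helper gave up, hence "output didn't produce a URL". The sketch below is not the functional_test.go code, only an assumed reconstruction of that kind of check. The binary path and flags are taken from the log above; the function name dashboardURL and the 30-second budget are made up for illustration.

// Illustrative sketch only, NOT the code in functional_test.go: start
// "minikube dashboard --url" and wait a bounded time for a URL on stdout.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func dashboardURL(profile string, timeout time.Duration) (string, error) {
	cmd := exec.Command("out/minikube-linux-amd64",
		"dashboard", "--url", "--port", "36195", "-p", profile, "--alsologtostderr", "-v=1")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		return "", err
	}
	if err := cmd.Start(); err != nil {
		return "", err
	}
	// A real test would also stop the daemon cleanly and Wait on the process.
	defer cmd.Process.Kill()

	found := make(chan string, 1)
	go func() {
		sc := bufio.NewScanner(stdout)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			if strings.HasPrefix(line, "http://") || strings.HasPrefix(line, "https://") {
				found <- line
				return
			}
		}
	}()

	select {
	case u := <-found:
		return u, nil
	case <-time.After(timeout):
		return "", fmt.Errorf("output didn't produce a URL within %v", timeout)
	}
}

func main() {
	u, err := dashboardURL("functional-991175", 30*time.Second)
	if err != nil {
		fmt.Println("FAIL:", err)
		return
	}
	fmt.Println("dashboard URL:", u)
}

A check along these lines times out in exactly the way logged above while the dashboard pods are still being created; the kubelet log shows the dashboard pods only mounting volumes and pulling images between 02:37:15 and 02:37:27, well inside the 21-second window of this test.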

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (37.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-509202 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-509202 --alsologtostderr -v=1] ...
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-509202 --alsologtostderr -v=1] stdout:
functional_test.go:925: (dbg) [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-509202 --alsologtostderr -v=1] stderr:
I1219 02:40:34.156824   17575 out.go:360] Setting OutFile to fd 1 ...
I1219 02:40:34.156978   17575 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:34.156993   17575 out.go:374] Setting ErrFile to fd 2...
I1219 02:40:34.156999   17575 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:40:34.157343   17575 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:40:34.157703   17575 mustload.go:66] Loading cluster: functional-509202
I1219 02:40:34.158245   17575 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:40:34.160667   17575 host.go:66] Checking if "functional-509202" exists ...
I1219 02:40:34.160880   17575 api_server.go:166] Checking apiserver status ...
I1219 02:40:34.160917   17575 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1219 02:40:34.163677   17575 main.go:144] libmachine: domain functional-509202 has defined MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:40:34.164185   17575 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:a8:92", ip: ""} in network mk-functional-509202: {Iface:virbr1 ExpiryTime:2025-12-19 03:37:46 +0000 UTC Type:0 Mac:52:54:00:28:a8:92 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-509202 Clientid:01:52:54:00:28:a8:92}
I1219 02:40:34.164219   17575 main.go:144] libmachine: domain functional-509202 has defined IP address 192.168.39.198 and MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:40:34.164380   17575 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-509202/id_rsa Username:docker}
I1219 02:40:34.275563   17575 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5413/cgroup
W1219 02:40:34.288502   17575 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5413/cgroup: Process exited with status 1
stdout:

stderr:
I1219 02:40:34.288553   17575 ssh_runner.go:195] Run: ls
I1219 02:40:34.293833   17575 api_server.go:253] Checking apiserver healthz at https://192.168.39.198:8441/healthz ...
I1219 02:40:34.300188   17575 api_server.go:279] https://192.168.39.198:8441/healthz returned 200:
ok
W1219 02:40:34.300245   17575 out.go:285] * Enabling dashboard ...
* Enabling dashboard ...
I1219 02:40:34.300390   17575 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:40:34.300403   17575 addons.go:70] Setting dashboard=true in profile "functional-509202"
I1219 02:40:34.300420   17575 addons.go:239] Setting addon dashboard=true in "functional-509202"
I1219 02:40:34.300442   17575 host.go:66] Checking if "functional-509202" exists ...
I1219 02:40:34.301915   17575 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:40:34.301936   17575 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
I1219 02:40:34.305710   17575 main.go:144] libmachine: domain functional-509202 has defined MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:40:34.306206   17575 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:a8:92", ip: ""} in network mk-functional-509202: {Iface:virbr1 ExpiryTime:2025-12-19 03:37:46 +0000 UTC Type:0 Mac:52:54:00:28:a8:92 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-509202 Clientid:01:52:54:00:28:a8:92}
I1219 02:40:34.306247   17575 main.go:144] libmachine: domain functional-509202 has defined IP address 192.168.39.198 and MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:40:34.306429   17575 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-509202/id_rsa Username:docker}
I1219 02:40:34.417193   17575 ssh_runner.go:195] Run: test -f /usr/bin/helm
I1219 02:40:34.422561   17575 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
I1219 02:40:34.427867   17575 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
I1219 02:40:35.552198   17575 ssh_runner.go:235] Completed: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh": (1.1242963s)
I1219 02:40:35.552313   17575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
I1219 02:40:38.964195   17575 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.411831317s)
I1219 02:40:38.964303   17575 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
I1219 02:40:39.349436   17575 addons.go:500] Verifying addon dashboard=true in "functional-509202"
I1219 02:40:39.352568   17575 out.go:179] * Verifying dashboard addon...
I1219 02:40:39.354654   17575 kapi.go:59] client config for functional-509202: &rest.Config{Host:"https://192.168.39.198:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:40:39.355286   17575 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I1219 02:40:39.355307   17575 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I1219 02:40:39.355314   17575 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I1219 02:40:39.355321   17575 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I1219 02:40:39.355328   17575 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I1219 02:40:39.355715   17575 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
I1219 02:40:39.378064   17575 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
I1219 02:40:39.378094   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:39.861058   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:40.360393   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:40.860663   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:41.363901   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:41.864446   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:42.366401   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:42.862386   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:43.360342   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:43.858920   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:44.359778   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:44.858907   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:45.360120   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:45.864698   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:46.359345   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:46.859020   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:47.359361   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:47.859505   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:48.359311   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:48.859993   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:49.359669   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:49.941028   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:50.360309   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:50.861268   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:51.359332   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:51.862668   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:52.453970   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:52.860810   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:53.364976   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:53.861133   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:54.361194   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:54.859223   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:55.359959   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:55.864160   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:56.359793   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:56.858961   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:57.360207   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:57.860779   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:58.360908   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:58.861616   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:59.365082   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:40:59.858891   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:00.359199   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:00.860720   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:01.361130   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:01.859525   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:02.359656   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:02.859845   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:03.359214   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:03.859982   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:04.359716   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:04.860311   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:05.359379   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:05.861536   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:06.358663   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:06.859262   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:07.360990   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:07.883077   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:08.359420   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:08.858989   17575 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
I1219 02:41:09.359622   17575 kapi.go:107] duration metric: took 30.00390932s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
I1219 02:41:09.361114   17575 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:

	minikube -p functional-509202 addons enable metrics-server

I1219 02:41:09.362385   17575 addons.go:202] Writing out "functional-509202" config to set dashboard=true...
W1219 02:41:09.362724   17575 out.go:285] * Verifying dashboard health ...
* Verifying dashboard health ...
I1219 02:41:09.363124   17575 kapi.go:59] client config for functional-509202: &rest.Config{Host:"https://192.168.39.198:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:41:09.365897   17575 service.go:215] Found service: &Service{ObjectMeta:{kubernetes-dashboard-kong-proxy  kubernetes-dashboard  e3dfd74b-cb27-45b6-8f36-d1934f3414fe 781 0 2025-12-19 02:40:38 +0000 UTC <nil> <nil> map[app.kubernetes.io/instance:kubernetes-dashboard app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:kong app.kubernetes.io/version:3.9 enable-metrics:true helm.sh/chart:kong-2.52.0] map[meta.helm.sh/release-name:kubernetes-dashboard meta.helm.sh/release-namespace:kubernetes-dashboard] [] [] [{helm Update v1 2025-12-19 02:40:38 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:app.kubernetes.io/version":{},"f:enable-metrics":{},"f:helm.sh/chart":{}}},"f:spec":{"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".
":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:kong-proxy-tls,Protocol:TCP,Port:443,TargetPort:{0 8443 },NodePort:30270,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/component: app,app.kubernetes.io/instance: kubernetes-dashboard,app.kubernetes.io/name: kong,},ClusterIP:10.102.181.162,Type:NodePort,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.102.181.162],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,TrafficDistribution:nil,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
I1219 02:41:09.366108   17575 host.go:66] Checking if "functional-509202" exists ...
I1219 02:41:09.369379   17575 main.go:144] libmachine: domain functional-509202 has defined MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:09.369840   17575 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:a8:92", ip: ""} in network mk-functional-509202: {Iface:virbr1 ExpiryTime:2025-12-19 03:37:46 +0000 UTC Type:0 Mac:52:54:00:28:a8:92 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-509202 Clientid:01:52:54:00:28:a8:92}
I1219 02:41:09.369876   17575 main.go:144] libmachine: domain functional-509202 has defined IP address 192.168.39.198 and MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:09.370580   17575 kapi.go:59] client config for functional-509202: &rest.Config{Host:"https://192.168.39.198:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt", KeyFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.key", CAFile:"/home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Next
Protos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2863880), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
I1219 02:41:09.377704   17575 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:41:09.382623   17575 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:41:09.387211   17575 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:41:09.391517   17575 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:41:09.574177   17575 warnings.go:110] "Warning: v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice"
I1219 02:41:09.627102   17575 out.go:179] * Dashboard Token:
I1219 02:41:09.628217   17575 out.go:203] eyJhbGciOiJSUzI1NiIsImtpZCI6ImVCdDFWY2pwVEpyZmRxZ2tkTVYwbDNTNVh4Rm16ajduX0tBRDVWVW5UNncifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNzY2MTk4NDY5LCJpYXQiOjE3NjYxMTIwNjksImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwianRpIjoiMGJkYjEwOGItYTcxYi00ZjgxLWJjYTQtZmZiYzFjNDRkZWNhIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZWVmNjBjNDUtZTFmNy00NmUyLThhOGItZWMzN2QxN2U0ODQ3In19LCJuYmYiOjE3NjYxMTIwNjksInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.Ft0x22qlKFk79ptLN_I1lwWQpB8IbJ95EkvrUOCHBaUio8JO7vvsjRSZoPwzy_3m97IaEpTqBjJvLFm_e6SiGSGacKnuqU98HMprGgT7qQGSVR4lNjAQ6oyjLcD8pTPk4mL6R725dUEtCAiHDtr3hbX3JsFFZrTnB37b2ccWTtfHYAUEYe14t6fLa3nZXyUf0r_85Dz6LLneu4VX4YdZ_Svz8Me0NAb4mBY6kP-adUqGeEhxvPyTHDSD5ic_L83_ej6fjNURZbRAkwH53HRWErhiwQxUa06PP04RS1nusqLXudXOyqdg0i6IHNIQc17c_YgSzGUeTr6e37s3saQmoA
I1219 02:41:09.629217   17575 out.go:203] https://192.168.39.198:30270
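In this second run the command did eventually print a login token and the NodePort URL above, at 02:41:09, roughly 35 seconds after it was started at 02:40:34; the URL check at functional_test.go:933 evidently did not see it in time. For reference, a minimal sketch of probing that endpoint by hand: the URL is the one from the line above and the bearer token would be the value printed after "Dashboard Token:"; everything else (the timeout, the placeholder token variable, skipping TLS verification for what is presumably a self-signed Kong certificate) is an assumption made for this sketch.

// By-hand probe of the NodePort URL printed by "minikube dashboard --url".
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "https://192.168.39.198:30270" // from the log line above
	token := "<paste the Dashboard Token from the log>"

	client := &http.Client{
		Timeout: 10 * time.Second,
		Transport: &http.Transport{
			// Verification is skipped only because the kong proxy's certificate
			// is presumably self-signed in this setup (an assumption).
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}

	req, err := http.NewRequest(http.MethodGet, url, nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("Authorization", "Bearer "+token)

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("dashboard not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("dashboard responded:", resp.Status)
}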
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-509202 -n functional-509202
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p functional-509202 logs -n 25: (1.426857963s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ functional-509202 ssh findmnt -T /mount2                                                                                                                           │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh findmnt -T /mount3                                                                                                                           │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ mount   │ -p functional-509202 --kill=true                                                                                                                                   │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │                     │
	│ service │ functional-509202 service hello-node-connect --url                                                                                                                 │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh sudo systemctl is-active docker                                                                                                              │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │                     │
	│ ssh     │ functional-509202 ssh sudo systemctl is-active crio                                                                                                                │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │                     │
	│ image   │ functional-509202 image load --daemon kicbase/echo-server:functional-509202 --alsologtostderr                                                                      │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image ls                                                                                                                                         │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image load --daemon kicbase/echo-server:functional-509202 --alsologtostderr                                                                      │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image ls                                                                                                                                         │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image load --daemon kicbase/echo-server:functional-509202 --alsologtostderr                                                                      │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image ls                                                                                                                                         │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image save kicbase/echo-server:functional-509202 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image rm kicbase/echo-server:functional-509202 --alsologtostderr                                                                                 │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image ls                                                                                                                                         │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr                                       │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image ls                                                                                                                                         │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ image   │ functional-509202 image save --daemon kicbase/echo-server:functional-509202 --alsologtostderr                                                                      │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh sudo cat /etc/test/nested/copy/8978/hosts                                                                                                    │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh sudo cat /etc/ssl/certs/8978.pem                                                                                                             │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh sudo cat /usr/share/ca-certificates/8978.pem                                                                                                 │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                           │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh sudo cat /etc/ssl/certs/89782.pem                                                                                                            │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh sudo cat /usr/share/ca-certificates/89782.pem                                                                                                │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	│ ssh     │ functional-509202 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                           │ functional-509202 │ jenkins │ v1.37.0 │ 19 Dec 25 02:40 UTC │ 19 Dec 25 02:40 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:40:33
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:40:33.694422   17463 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:40:33.694561   17463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:40:33.694571   17463 out.go:374] Setting ErrFile to fd 2...
	I1219 02:40:33.694577   17463 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:40:33.694789   17463 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:40:33.695282   17463 out.go:368] Setting JSON to false
	I1219 02:40:33.696203   17463 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1373,"bootTime":1766110661,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:40:33.696258   17463 start.go:143] virtualization: kvm guest
	I1219 02:40:33.697858   17463 out.go:179] * [functional-509202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:40:33.698989   17463 notify.go:221] Checking for updates...
	I1219 02:40:33.699023   17463 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:40:33.700301   17463 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:40:33.701389   17463 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:40:33.703319   17463 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:40:33.704481   17463 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:40:33.705581   17463 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:40:33.707333   17463 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 02:40:33.708063   17463 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:40:33.748116   17463 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:40:33.749965   17463 start.go:309] selected driver: kvm2
	I1219 02:40:33.749985   17463 start.go:928] validating driver "kvm2" against &{Name:functional-509202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-509202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:40:33.750120   17463 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:40:33.751050   17463 cni.go:84] Creating CNI manager for ""
	I1219 02:40:33.751119   17463 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 02:40:33.751162   17463 start.go:353] cluster config:
	{Name:functional-509202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-509202 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mount
IP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:40:33.753053   17463 out.go:179] * dry-run validation complete!
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	46c9bce3872ef       59f642f485d26       2 seconds ago        Running             kubernetes-dashboard-web               0                   2e34b8a6c8629       kubernetes-dashboard-web-7f7574785f-nl2kk               kubernetes-dashboard
	fb448daf2304e       dd54374d0ab14       8 seconds ago        Running             kubernetes-dashboard-auth              0                   5dcd24beb8126       kubernetes-dashboard-auth-74bd94fcc6-9nrlh              kubernetes-dashboard
	1ed389d958c50       d9cbc9f4053ca       11 seconds ago       Running             kubernetes-dashboard-metrics-scraper   0                   b3a380be21a22       kubernetes-dashboard-metrics-scraper-594bbfb84b-5rlk6   kubernetes-dashboard
	c2c8610095818       a0607af4fcd8a       15 seconds ago       Running             kubernetes-dashboard-api               0                   1524432c0117c       kubernetes-dashboard-api-d4d95dc96-qxp6j                kubernetes-dashboard
	298cbf307538f       3a975970da2f5       17 seconds ago       Running             proxy                                  0                   d9782faed7123       kubernetes-dashboard-kong-78b7499b45-28ltp              kubernetes-dashboard
	5f4214168fc6c       3a975970da2f5       18 seconds ago       Exited              clear-stale-pid                        0                   d9782faed7123       kubernetes-dashboard-kong-78b7499b45-28ltp              kubernetes-dashboard
	7c5f5cf30668f       9056ab77afb8e       27 seconds ago       Running             echo-server                            0                   2fa93016dcdba       hello-node-connect-9f67c86d4-7j4b8                      default
	95d408e35e38a       56cc512116c8f       30 seconds ago       Exited              mount-munger                           0                   41b3dfd484fc1       busybox-mount                                           default
	6372c3741a379       6e38f40d628db       46 seconds ago       Running             storage-provisioner                    4                   a11a6a0b9f040       storage-provisioner                                     kube-system
	5921ef5d06403       aa5e3ebc0dfed       55 seconds ago       Running             coredns                                2                   7b77608e0d8fa       coredns-7d764666f9-l27rd                                kube-system
	e9e4b7d15bb31       58865405a13bc       About a minute ago   Running             kube-apiserver                         0                   aec198fe38fa2       kube-apiserver-functional-509202                        kube-system
	d9eaa4b2a623a       73f80cdc073da       About a minute ago   Running             kube-scheduler                         2                   355d7b8a9d395       kube-scheduler-functional-509202                        kube-system
	72e85101c23ff       5032a56602e1b       About a minute ago   Running             kube-controller-manager                3                   ec66238e70cd7       kube-controller-manager-functional-509202               kube-system
	14414d0a9abb1       af0321f3a4f38       About a minute ago   Running             kube-proxy                             2                   7786fb6a301b7       kube-proxy-lvgq5                                        kube-system
	bcf2fb43aa9ed       6e38f40d628db       About a minute ago   Exited              storage-provisioner                    3                   a11a6a0b9f040       storage-provisioner                                     kube-system
	f2ed193f52053       0a108f7189562       About a minute ago   Running             etcd                                   2                   a704a89c417f2       etcd-functional-509202                                  kube-system
	f416ec73f110a       5032a56602e1b       About a minute ago   Exited              kube-controller-manager                2                   ec66238e70cd7       kube-controller-manager-functional-509202               kube-system
	fc2a8a796981d       0a108f7189562       2 minutes ago        Exited              etcd                                   1                   a704a89c417f2       etcd-functional-509202                                  kube-system
	533d414b1d9cd       af0321f3a4f38       2 minutes ago        Exited              kube-proxy                             1                   7786fb6a301b7       kube-proxy-lvgq5                                        kube-system
	b013af0f28c32       aa5e3ebc0dfed       2 minutes ago        Exited              coredns                                1                   7b77608e0d8fa       coredns-7d764666f9-l27rd                                kube-system
	55efacdb617f1       73f80cdc073da       2 minutes ago        Exited              kube-scheduler                         1                   355d7b8a9d395       kube-scheduler-functional-509202                        kube-system
	
	
	==> containerd <==
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.036515511Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda00f28b0-dfcc-4542-a08d-13a5e32eef4f/298cbf307538f2882a2c20edf117d708e910fec9d4d08c3fd8369ad5a9779059/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.037637248Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podaac0262a-a347-4688-bdfa-20d79ced6334/c2c8610095818e557802b26915abc0e79e7465978b9c724f68222eb1f91d644e/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.038804419Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod0c19810e-5c2b-4250-a3ca-c1d536df7ce7/1ed389d958c505a064e22ee7c1fe1d47ca2956dfb3baecbedaffda8735d3d2da/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.044196887Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf2fb8331-697a-4b97-b740-6cb8d4eea15b/5921ef5d0640312e1d32b94409ced548f6ba625acf2c82bc8285270fb68b6b0e/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.045044600Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podabb282a272e5136d2fb4147b25483038/d9eaa4b2a623ac71c2408fc19ef69c9aab4795fb0adeac2e8157d5eafdca44a7/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.045842440Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podf9568c50-86ac-42e8-b9dc-dc91c5140945/14414d0a9abb12f7fc877a73d589009b0b260c9af990c5b6a2c4ee9941e9399b/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.046412894Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod179f5106-59fa-4a4d-93bc-bbd707ec6f17/6372c3741a37944144328ca44a89c01e06da43452eae542d3f9eb70be3995dde/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.047224279Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3c69636168545d9108c3bd7a285e9472/72e85101c23ff5e305710756575c8960d09ad1fe57025c029b56a5997c61fe30/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.048720115Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod218029457207f4d370d0ec3b84208bd7/e9e4b7d15bb316fb843f6503325fceb332380f462c52d49d28c86f8a007b8053/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.049543078Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod4ebabee6-861b-4953-8ee9-d92d121c6246/7c5f5cf30668f1997f01f20b786738dd7b51bceb154a8fb6ba139c839bd39442/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.050515030Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4afdff29d06473614dd918ae75d3ac59/f2ed193f5205340e64d2ff90be61a06f808de03028866d08b293c8a4914930d8/hugetlb.2MB.events\""
	Dec 19 02:41:06 functional-509202 containerd[4478]: time="2025-12-19T02:41:06.052894376Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podebd4690e-1d36-425e-8020-a7700aa7cb5e/fb448daf2304e9b8fc29bd186fa5e136b7ae7321518446f91aaf0c6b219eb208/hugetlb.2MB.events\""
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.426023200Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-web:1.7.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.427345157Z" level=info msg="stop pulling image docker.io/kubernetesui/dashboard-web:1.7.0: active requests=0, bytes read=62507989"
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.428595590Z" level=info msg="ImageCreate event name:\"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.432019944Z" level=info msg="ImageCreate event name:\"docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.433029744Z" level=info msg="Pulled image \"docker.io/kubernetesui/dashboard-web:1.7.0\" with image id \"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\", repo tag \"docker.io/kubernetesui/dashboard-web:1.7.0\", repo digest \"docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d\", size \"62497108\" in 6.325791893s"
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.433055790Z" level=info msg="PullImage \"docker.io/kubernetesui/dashboard-web:1.7.0\" returns image reference \"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06\""
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.434596496Z" level=info msg="PullImage \"public.ecr.aws/nginx/nginx:alpine\""
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.442755059Z" level=info msg="CreateContainer within sandbox \"2e34b8a6c86295579dc2e084fb0a3eb64cabb81da473da3e462523d33856fcd2\" for container name:\"kubernetes-dashboard-web\""
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.450867711Z" level=info msg="Container 46c9bce3872efe851f85b7325b2d33938c1e33433791d197d92eada9b2a16fe2: CDI devices from CRI Config.CDIDevices: []"
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.479972321Z" level=info msg="CreateContainer within sandbox \"2e34b8a6c86295579dc2e084fb0a3eb64cabb81da473da3e462523d33856fcd2\" for name:\"kubernetes-dashboard-web\" returns container id \"46c9bce3872efe851f85b7325b2d33938c1e33433791d197d92eada9b2a16fe2\""
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.481527257Z" level=info msg="StartContainer for \"46c9bce3872efe851f85b7325b2d33938c1e33433791d197d92eada9b2a16fe2\""
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.482807906Z" level=info msg="connecting to shim 46c9bce3872efe851f85b7325b2d33938c1e33433791d197d92eada9b2a16fe2" address="unix:///run/containerd/s/86eb81c5417a95369b0289278053036b21a592706f18d189c445e2d4374c6f4f" protocol=ttrpc version=3
	Dec 19 02:41:08 functional-509202 containerd[4478]: time="2025-12-19T02:41:08.617242193Z" level=info msg="StartContainer for \"46c9bce3872efe851f85b7325b2d33938c1e33433791d197d92eada9b2a16fe2\" returns successfully"
	
	
	==> coredns [5921ef5d0640312e1d32b94409ced548f6ba625acf2c82bc8285270fb68b6b0e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:60815 - 21328 "HINFO IN 8463068357624675460.4992561081720110466. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.015809872s
	
	
	==> coredns [b013af0f28c324f1f3e6c268548c5b26b375cba6ca113d716c765dbfb15bd32c] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 680cec097987c24242735352e9de77b2ba657caea131666c4002607b6f81fb6322fe6fa5c2d434be3fcd1251845cd6b7641e3a08a7d3b88486730de31a010646
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:59385 - 54292 "HINFO IN 7458527073063138393.3006518217127408045. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.142670246s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               functional-509202
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-509202
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=functional-509202
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T02_38_05_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 02:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-509202
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 02:41:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 02:41:09 +0000   Fri, 19 Dec 2025 02:37:59 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 02:41:09 +0000   Fri, 19 Dec 2025 02:37:59 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 02:41:09 +0000   Fri, 19 Dec 2025 02:37:59 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 02:41:09 +0000   Fri, 19 Dec 2025 02:38:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.198
	  Hostname:    functional-509202
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             4001788Ki
	  pods:               110
	System Info:
	  Machine ID:                 94146fbd9c3c4383b5d35935f2a63158
	  System UUID:                94146fbd-9c3c-4383-b5d3-5935f2a63158
	  Boot ID:                    2e46ae2b-781c-456a-8616-2af361f38566
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (16 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-5758569b79-6qsvn                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  default                     hello-node-connect-9f67c86d4-7j4b8                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         36s
	  default                     mysql-7d7b65bc95-qhht8                                   600m (30%)    700m (35%)  512Mi (13%)      700Mi (17%)    11s
	  default                     sp-pod                                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 coredns-7d764666f9-l27rd                                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m1s
	  kube-system                 etcd-functional-509202                                   100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         3m7s
	  kube-system                 kube-apiserver-functional-509202                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         62s
	  kube-system                 kube-controller-manager-functional-509202                200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m6s
	  kube-system                 kube-proxy-lvgq5                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 kube-scheduler-functional-509202                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m7s
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m
	  kubernetes-dashboard        kubernetes-dashboard-api-d4d95dc96-qxp6j                 100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    32s
	  kubernetes-dashboard        kubernetes-dashboard-auth-74bd94fcc6-9nrlh               100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    32s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-28ltp               0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-5rlk6    100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    32s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-nl2kk                100m (5%)     250m (12%)  200Mi (5%)       400Mi (10%)    32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1750m (87%)   1700m (85%)
	  memory             1482Mi (37%)  2470Mi (63%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  3m2s  node-controller  Node functional-509202 event: Registered Node functional-509202 in Controller
	  Normal  RegisteredNode  107s  node-controller  Node functional-509202 event: Registered Node functional-509202 in Controller
	  Normal  RegisteredNode  59s   node-controller  Node functional-509202 event: Registered Node functional-509202 in Controller
	
	
	==> dmesg <==
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.083228] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.101976] kauditd_printk_skb: 102 callbacks suppressed
	[Dec19 02:38] kauditd_printk_skb: 199 callbacks suppressed
	[  +0.000024] kauditd_printk_skb: 18 callbacks suppressed
	[  +4.832277] kauditd_printk_skb: 290 callbacks suppressed
	[  +5.224483] kauditd_printk_skb: 6 callbacks suppressed
	[  +0.913755] kauditd_printk_skb: 84 callbacks suppressed
	[  +5.067902] kauditd_printk_skb: 13 callbacks suppressed
	[Dec19 02:39] kauditd_printk_skb: 54 callbacks suppressed
	[  +9.669432] kauditd_printk_skb: 22 callbacks suppressed
	[  +3.079737] kauditd_printk_skb: 43 callbacks suppressed
	[  +0.117735] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.033289] kauditd_printk_skb: 107 callbacks suppressed
	[Dec19 02:40] kauditd_printk_skb: 55 callbacks suppressed
	[  +3.937429] kauditd_printk_skb: 65 callbacks suppressed
	[  +2.953180] kauditd_printk_skb: 3 callbacks suppressed
	[  +8.971802] kauditd_printk_skb: 33 callbacks suppressed
	[  +5.050026] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.390476] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.419896] kauditd_printk_skb: 116 callbacks suppressed
	[  +1.944074] kauditd_printk_skb: 232 callbacks suppressed
	[  +6.392248] kauditd_printk_skb: 38 callbacks suppressed
	[  +6.929389] kauditd_printk_skb: 48 callbacks suppressed
	[Dec19 02:41] kauditd_printk_skb: 74 callbacks suppressed
	
	
	==> etcd [f2ed193f5205340e64d2ff90be61a06f808de03028866d08b293c8a4914930d8] <==
	{"level":"info","ts":"2025-12-19T02:40:00.701117Z","caller":"etcdserver/server.go:1820","msg":"published local member to cluster through raft","local-member-id":"f1d2ab5330a2a0e3","local-member-attributes":"{Name:functional-509202 ClientURLs:[https://192.168.39.198:2379]}","cluster-id":"9fb372ad12afeb1b","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T02:40:00.701125Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:40:00.701305Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:40:00.702144Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T02:40:00.702182Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T02:40:00.704858Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:40:00.704941Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:40:00.708930Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:40:00.709569Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"warn","ts":"2025-12-19T02:40:47.696464Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.575139ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:40:47.696729Z","caller":"traceutil/trace.go:172","msg":"trace[2023269187] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:915; }","duration":"110.87355ms","start":"2025-12-19T02:40:47.585842Z","end":"2025-12-19T02:40:47.696716Z","steps":["trace[2023269187] 'range keys from in-memory index tree'  (duration: 110.526525ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:40:49.825045Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"302.785375ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.198\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-12-19T02:40:49.825115Z","caller":"traceutil/trace.go:172","msg":"trace[2023113192] range","detail":"{range_begin:/registry/masterleases/192.168.39.198; range_end:; response_count:1; response_revision:918; }","duration":"302.863029ms","start":"2025-12-19T02:40:49.522242Z","end":"2025-12-19T02:40:49.825105Z","steps":["trace[2023113192] 'range keys from in-memory index tree'  (duration: 302.681702ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:40:49.825142Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:40:49.522227Z","time spent":"302.908511ms","remote":"127.0.0.1:55610","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":1,"response size":159,"request content":"key:\"/registry/masterleases/192.168.39.198\" limit:1 "}
	{"level":"warn","ts":"2025-12-19T02:40:49.825519Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"239.047188ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:40:49.825557Z","caller":"traceutil/trace.go:172","msg":"trace[1015132149] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:918; }","duration":"239.089399ms","start":"2025-12-19T02:40:49.586459Z","end":"2025-12-19T02:40:49.825549Z","steps":["trace[1015132149] 'range keys from in-memory index tree'  (duration: 238.9051ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:40:51.536396Z","caller":"traceutil/trace.go:172","msg":"trace[1725533001] transaction","detail":"{read_only:false; response_revision:920; number_of_response:1; }","duration":"108.773186ms","start":"2025-12-19T02:40:51.427611Z","end":"2025-12-19T02:40:51.536384Z","steps":["trace[1725533001] 'process raft request'  (duration: 108.698949ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:40:52.432058Z","caller":"traceutil/trace.go:172","msg":"trace[1718699288] transaction","detail":"{read_only:false; response_revision:924; number_of_response:1; }","duration":"365.224223ms","start":"2025-12-19T02:40:52.066817Z","end":"2025-12-19T02:40:52.432041Z","steps":["trace[1718699288] 'process raft request'  (duration: 365.04678ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:40:52.434879Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T02:40:52.066791Z","time spent":"366.818624ms","remote":"127.0.0.1:55878","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":13857,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp\" mod_revision:848 > success:<request_put:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp\" value_size:13771 >> failure:<request_range:<key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp\" > >"}
	{"level":"info","ts":"2025-12-19T02:40:52.432318Z","caller":"traceutil/trace.go:172","msg":"trace[1121218852] linearizableReadLoop","detail":"{readStateIndex:1011; appliedIndex:1011; }","duration":"251.379482ms","start":"2025-12-19T02:40:52.180926Z","end":"2025-12-19T02:40:52.432305Z","steps":["trace[1121218852] 'read index received'  (duration: 250.833145ms)","trace[1121218852] 'applied index is now lower than readState.Index'  (duration: 545.315µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T02:40:52.432541Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"251.710384ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:40:52.435092Z","caller":"traceutil/trace.go:172","msg":"trace[2077293871] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:924; }","duration":"254.274728ms","start":"2025-12-19T02:40:52.180810Z","end":"2025-12-19T02:40:52.435085Z","steps":["trace[2077293871] 'agreement among raft nodes before linearized reading'  (duration: 251.535516ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T02:40:52.435181Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"167.255675ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T02:40:52.435196Z","caller":"traceutil/trace.go:172","msg":"trace[271497301] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:924; }","duration":"167.273251ms","start":"2025-12-19T02:40:52.267918Z","end":"2025-12-19T02:40:52.435191Z","steps":["trace[271497301] 'agreement among raft nodes before linearized reading'  (duration: 167.2453ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T02:41:07.868740Z","caller":"traceutil/trace.go:172","msg":"trace[1082797332] transaction","detail":"{read_only:false; response_revision:989; number_of_response:1; }","duration":"198.753551ms","start":"2025-12-19T02:41:07.669966Z","end":"2025-12-19T02:41:07.868719Z","steps":["trace[1082797332] 'process raft request'  (duration: 198.618664ms)"],"step_count":1}
	
	
	==> etcd [fc2a8a796981d70b29cadd47059f1d6947296c19d98ac72ccda4eaf2039a6507] <==
	{"level":"info","ts":"2025-12-19T02:39:07.700141Z","caller":"embed/serve.go:138","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T02:39:07.700521Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T02:39:07.700556Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T02:39:07.701098Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:39:07.702216Z","caller":"v3rpc/health.go:63","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-12-19T02:39:07.703488Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.198:2379"}
	{"level":"info","ts":"2025-12-19T02:39:07.704140Z","caller":"embed/serve.go:283","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T02:39:59.364129Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-12-19T02:39:59.364573Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-509202","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"]}
	{"level":"error","ts":"2025-12-19T02:39:59.364814Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:39:59.368831Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-12-19T02:39:59.368981Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:39:59.369075Z","caller":"etcdserver/server.go:1297","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"f1d2ab5330a2a0e3","current-leader-member-id":"f1d2ab5330a2a0e3"}
	{"level":"info","ts":"2025-12-19T02:39:59.369139Z","caller":"etcdserver/server.go:2358","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-12-19T02:39:59.369169Z","caller":"etcdserver/server.go:2335","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-12-19T02:39:59.369303Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:39:59.369361Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:39:59.369420Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-12-19T02:39:59.369457Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.198:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-12-19T02:39:59.369470Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.198:2379: use of closed network connection"}
	{"level":"error","ts":"2025-12-19T02:39:59.369477Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.198:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:39:59.372984Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"error","ts":"2025-12-19T02:39:59.373023Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.39.198:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-12-19T02:39:59.373139Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.39.198:2380"}
	{"level":"info","ts":"2025-12-19T02:39:59.373149Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-509202","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.198:2380"],"advertise-client-urls":["https://192.168.39.198:2379"]}
	
	
	==> kernel <==
	 02:41:10 up 3 min,  0 users,  load average: 1.45, 0.73, 0.30
	Linux functional-509202 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [e9e4b7d15bb316fb843f6503325fceb332380f462c52d49d28c86f8a007b8053] <==
	I1219 02:40:36.417580       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager
	I1219 02:40:36.436024       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:40:36.442794       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:40:36.457183       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager
	I1219 02:40:36.467006       1 handler.go:304] Adding GroupVersion configuration.konghq.com v1 to ResourceManager
	I1219 02:40:38.734078       1 controller.go:667] quota admission added evaluator for: namespaces
	I1219 02:40:38.826359       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-auth" clusterIPs={"IPv4":"10.101.199.123"}
	I1219 02:40:38.839992       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-api" clusterIPs={"IPv4":"10.99.179.121"}
	I1219 02:40:38.847234       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper" clusterIPs={"IPv4":"10.109.136.220"}
	I1219 02:40:38.867280       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-web" clusterIPs={"IPv4":"10.101.62.37"}
	I1219 02:40:38.871452       1 alloc.go:329] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard-kong-proxy" clusterIPs={"IPv4":"10.102.181.162"}
	W1219 02:40:41.569847       1 logging.go:55] [core] [Channel #262 SubChannel #263]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.603966       1 logging.go:55] [core] [Channel #266 SubChannel #267]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:40:41.657853       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.700911       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.737498       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.782863       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.800978       1 logging.go:55] [core] [Channel #286 SubChannel #287]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W1219 02:40:41.818206       1 logging.go:55] [core] [Channel #290 SubChannel #291]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.830978       1 logging.go:55] [core] [Channel #294 SubChannel #295]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.853616       1 logging.go:55] [core] [Channel #298 SubChannel #299]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.874291       1 logging.go:55] [core] [Channel #302 SubChannel #303]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1219 02:40:41.888061       1 logging.go:55] [core] [Channel #306 SubChannel #307]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	I1219 02:40:45.174944       1 alloc.go:329] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.104.32.49"}
	I1219 02:40:59.259067       1 alloc.go:329] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.102.17.250"}
	
	
	==> kube-controller-manager [72e85101c23ff5e305710756575c8960d09ad1fe57025c029b56a5997c61fe30] <==
	I1219 02:40:11.485910       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.486110       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.486162       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.486485       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.481210       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.481167       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.497336       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.565526       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.577119       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:11.577162       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 02:40:11.577166       1 garbagecollector.go:169] "Proceeding to collect garbage"
	I1219 02:40:11.904768       1 endpointslice_controller.go:361] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="EndpointSlice informer cache is out of date"
	I1219 02:40:41.524858       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongingresses.configuration.konghq.com"
	I1219 02:40:41.525138       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="tcpingresses.configuration.konghq.com"
	I1219 02:40:41.525523       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongconsumers.configuration.konghq.com"
	I1219 02:40:41.525864       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongplugins.configuration.konghq.com"
	I1219 02:40:41.526631       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongcustomentities.configuration.konghq.com"
	I1219 02:40:41.526747       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongconsumergroups.configuration.konghq.com"
	I1219 02:40:41.527341       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="ingressclassparameterses.configuration.konghq.com"
	I1219 02:40:41.527517       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="udpingresses.configuration.konghq.com"
	I1219 02:40:41.527842       1 resource_quota_monitor.go:228] "QuotaMonitor created object count evaluator" resource="kongupstreampolicies.configuration.konghq.com"
	I1219 02:40:41.528807       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:40:41.590863       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:40:42.929508       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:42.991134       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-controller-manager [f416ec73f110a770a61962c167097919f0c84bf4dc765305a9c9bd2c499583a8] <==
	I1219 02:39:23.183900       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.184050       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.184141       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.184313       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.184150       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.186006       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.186457       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.186762       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" zone=""
	I1219 02:39:23.186834       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" node="functional-509202"
	I1219 02:39:23.187806       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I1219 02:39:23.187261       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.186895       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.188014       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.188031       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.187299       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.188379       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.188519       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.188882       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.188919       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.189260       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.189514       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.189835       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.190198       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.201903       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:23.249952       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-proxy [14414d0a9abb12f7fc877a73d589009b0b260c9af990c5b6a2c4ee9941e9399b] <==
	I1219 02:40:00.469873       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:40:08.471168       1 shared_informer.go:377] "Caches are synced"
	I1219 02:40:08.471195       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.198"]
	E1219 02:40:08.471272       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:40:08.511332       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:40:08.514675       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:40:08.514898       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:40:08.524830       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:40:08.525460       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:40:08.525747       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:40:08.528230       1 config.go:200] "Starting service config controller"
	I1219 02:40:08.528365       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:40:08.529418       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:40:08.529490       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:40:08.529515       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:40:08.529528       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:40:08.535011       1 config.go:309] "Starting node config controller"
	I1219 02:40:08.535071       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:40:08.535088       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 02:40:08.628892       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:40:08.630173       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 02:40:08.630558       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [533d414b1d9cd66e7dee8202514a366f000dc6b9c1d3beb0fa92487021f42991] <==
	I1219 02:39:10.503063       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:10.503098       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.39.198"]
	E1219 02:39:10.503278       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 02:39:10.536864       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 02:39:10.536907       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 02:39:10.536927       1 server_linux.go:136] "Using iptables Proxier"
	I1219 02:39:10.546430       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 02:39:10.546898       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 02:39:10.547162       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:39:10.551441       1 config.go:200] "Starting service config controller"
	I1219 02:39:10.551468       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 02:39:10.551483       1 config.go:106] "Starting endpoint slice config controller"
	I1219 02:39:10.551486       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 02:39:10.551757       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 02:39:10.551780       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 02:39:10.552151       1 config.go:309] "Starting node config controller"
	I1219 02:39:10.552202       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 02:39:10.552288       1 shared_informer.go:356] "Caches are synced" controller="node config"
	E1219 02:39:10.552962       1 event_broadcaster.go:279] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8441/apis/events.k8s.io/v1/namespaces/default/events\": dial tcp 192.168.39.198:8441: connect: connection refused"
	I1219 02:39:20.051718       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 02:39:20.051881       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 02:39:24.852632       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [55efacdb617f15b5da2599aaa9c5382f339a67329d19b1e18ed304831a2fa015] <==
	I1219 02:39:09.094990       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 02:39:09.095026       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:39:09.113269       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:39:09.114339       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1219 02:39:09.114490       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:39:09.114612       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 02:39:09.116576       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:39:09.120086       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:39:09.116587       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:39:09.120359       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:39:09.215849       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:09.221017       1 shared_informer.go:377] "Caches are synced"
	I1219 02:39:09.221147       1 shared_informer.go:377] "Caches are synced"
	E1219 02:39:19.978330       1 reflector.go:204] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1219 02:39:19.979782       1 reflector.go:204] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 02:39:19.979812       1 reflector.go:204] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1219 02:39:19.987494       1 reflector.go:204] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	I1219 02:40:04.567579       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1219 02:40:04.567827       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:40:04.567980       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1219 02:40:04.568003       1 requestheader_controller.go:194] Shutting down RequestHeaderAuthRequestController
	I1219 02:40:04.568587       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1219 02:40:04.568798       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1219 02:40:04.568913       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1219 02:40:04.569177       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d9eaa4b2a623ac71c2408fc19ef69c9aab4795fb0adeac2e8157d5eafdca44a7] <==
	I1219 02:40:06.684592       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 02:40:06.687097       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 02:40:06.687135       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 02:40:06.687997       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 02:40:06.688225       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 02:40:08.393027       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 02:40:08.393544       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 02:40:08.393632       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 02:40:08.393722       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 02:40:08.393784       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 02:40:08.393798       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 02:40:08.393877       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 02:40:08.393900       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 02:40:08.394055       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 02:40:08.394107       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 02:40:08.394199       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 02:40:08.394218       1 reflector.go:204] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.DeviceClass"
	E1219 02:40:08.395327       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StorageClass"
	E1219 02:40:08.395366       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 02:40:08.395417       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 02:40:08.395483       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 02:40:08.395524       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 02:40:08.399313       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 02:40:08.420348       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1219 02:40:11.389291       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 02:40:41 functional-509202 kubelet[5279]: I1219 02:40:41.988159    5279 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b3dfd484fc1d4011ff6f1e4ccffdd59e58a79c6e152ce7f6625ada555bb26c"
	Dec 19 02:40:43 functional-509202 kubelet[5279]: I1219 02:40:43.020621    5279 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="default/hello-node-connect-9f67c86d4-7j4b8" podStartSLOduration=1.75963864 podStartE2EDuration="9.020605441s" podCreationTimestamp="2025-12-19 02:40:34 +0000 UTC" firstStartedPulling="2025-12-19 02:40:35.24338273 +0000 UTC m=+29.615268456" lastFinishedPulling="2025-12-19 02:40:42.504349519 +0000 UTC m=+36.876235257" observedRunningTime="2025-12-19 02:40:43.018825275 +0000 UTC m=+37.390711013" watchObservedRunningTime="2025-12-19 02:40:43.020605441 +0000 UTC m=+37.392491181"
	Dec 19 02:40:43 functional-509202 kubelet[5279]: I1219 02:40:43.080799    5279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-glt8r\" (UniqueName: \"kubernetes.io/projected/2a3a1137-421e-412f-a243-e03747220300-kube-api-access-glt8r\") pod \"sp-pod\" (UID: \"2a3a1137-421e-412f-a243-e03747220300\") " pod="default/sp-pod"
	Dec 19 02:40:43 functional-509202 kubelet[5279]: I1219 02:40:43.080847    5279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-4b760375-2a80-4969-a12b-40191d549668\" (UniqueName: \"kubernetes.io/host-path/2a3a1137-421e-412f-a243-e03747220300-pvc-4b760375-2a80-4969-a12b-40191d549668\") pod \"sp-pod\" (UID: \"2a3a1137-421e-412f-a243-e03747220300\") " pod="default/sp-pod"
	Dec 19 02:40:45 functional-509202 kubelet[5279]: I1219 02:40:45.300287    5279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncrzz\" (UniqueName: \"kubernetes.io/projected/a22ce170-646c-4cc4-8519-e8e61424112b-kube-api-access-ncrzz\") pod \"hello-node-5758569b79-6qsvn\" (UID: \"a22ce170-646c-4cc4-8519-e8e61424112b\") " pod="default/hello-node-5758569b79-6qsvn"
	Dec 19 02:40:52 functional-509202 kubelet[5279]: E1219 02:40:52.051839    5279 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp" containerName="proxy"
	Dec 19 02:40:53 functional-509202 kubelet[5279]: E1219 02:40:53.059532    5279 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp" containerName="proxy"
	Dec 19 02:40:54 functional-509202 kubelet[5279]: E1219 02:40:54.070807    5279 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp" containerName="proxy"
	Dec 19 02:40:55 functional-509202 kubelet[5279]: E1219 02:40:55.077819    5279 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp" containerName="proxy"
	Dec 19 02:40:55 functional-509202 kubelet[5279]: I1219 02:40:55.383154    5279 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001788Ki","pods":"110"}
	Dec 19 02:40:55 functional-509202 kubelet[5279]: I1219 02:40:55.383243    5279 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001788Ki","pods":"110"}
	Dec 19 02:40:56 functional-509202 kubelet[5279]: I1219 02:40:56.104997    5279 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp" podStartSLOduration=4.671461595 podStartE2EDuration="18.104983081s" podCreationTimestamp="2025-12-19 02:40:38 +0000 UTC" firstStartedPulling="2025-12-19 02:40:39.627768317 +0000 UTC m=+33.999654045" lastFinishedPulling="2025-12-19 02:40:53.061289804 +0000 UTC m=+47.433175531" observedRunningTime="2025-12-19 02:40:54.092345866 +0000 UTC m=+48.464231624" watchObservedRunningTime="2025-12-19 02:40:56.104983081 +0000 UTC m=+50.476868827"
	Dec 19 02:40:58 functional-509202 kubelet[5279]: I1219 02:40:58.711620    5279 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001788Ki","pods":"110"}
	Dec 19 02:40:58 functional-509202 kubelet[5279]: I1219 02:40:58.711755    5279 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001788Ki","pods":"110"}
	Dec 19 02:40:59 functional-509202 kubelet[5279]: E1219 02:40:59.104008    5279 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-5rlk6" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 02:40:59 functional-509202 kubelet[5279]: I1219 02:40:59.127430    5279 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-5rlk6" podStartSLOduration=4.076401104 podStartE2EDuration="21.127376877s" podCreationTimestamp="2025-12-19 02:40:38 +0000 UTC" firstStartedPulling="2025-12-19 02:40:41.660220477 +0000 UTC m=+36.032106204" lastFinishedPulling="2025-12-19 02:40:58.711196239 +0000 UTC m=+53.083081977" observedRunningTime="2025-12-19 02:40:59.126197179 +0000 UTC m=+53.498082919" watchObservedRunningTime="2025-12-19 02:40:59.127376877 +0000 UTC m=+53.499262623"
	Dec 19 02:40:59 functional-509202 kubelet[5279]: I1219 02:40:59.127542    5279 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-api-d4d95dc96-qxp6j" podStartSLOduration=6.5857183169999995 podStartE2EDuration="21.127534124s" podCreationTimestamp="2025-12-19 02:40:38 +0000 UTC" firstStartedPulling="2025-12-19 02:40:40.840922197 +0000 UTC m=+35.212807923" lastFinishedPulling="2025-12-19 02:40:55.382738003 +0000 UTC m=+49.754623730" observedRunningTime="2025-12-19 02:40:56.106064397 +0000 UTC m=+50.477950126" watchObservedRunningTime="2025-12-19 02:40:59.127534124 +0000 UTC m=+53.499419871"
	Dec 19 02:40:59 functional-509202 kubelet[5279]: I1219 02:40:59.415333    5279 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-479kh\" (UniqueName: \"kubernetes.io/projected/9b8f48cb-e51c-4347-b0bd-31ec403f0a8c-kube-api-access-479kh\") pod \"mysql-7d7b65bc95-qhht8\" (UID: \"9b8f48cb-e51c-4347-b0bd-31ec403f0a8c\") " pod="default/mysql-7d7b65bc95-qhht8"
	Dec 19 02:41:00 functional-509202 kubelet[5279]: E1219 02:41:00.109046    5279 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-5rlk6" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 02:41:02 functional-509202 kubelet[5279]: I1219 02:41:02.109453    5279 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001788Ki","pods":"110"}
	Dec 19 02:41:02 functional-509202 kubelet[5279]: I1219 02:41:02.110191    5279 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001788Ki","pods":"110"}
	Dec 19 02:41:03 functional-509202 kubelet[5279]: I1219 02:41:03.136524    5279 pod_startup_latency_tracker.go:108] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-auth-74bd94fcc6-9nrlh" podStartSLOduration=4.720704306 podStartE2EDuration="25.136512538s" podCreationTimestamp="2025-12-19 02:40:38 +0000 UTC" firstStartedPulling="2025-12-19 02:40:41.691282436 +0000 UTC m=+36.063168161" lastFinishedPulling="2025-12-19 02:41:02.107090667 +0000 UTC m=+56.478976393" observedRunningTime="2025-12-19 02:41:03.136174106 +0000 UTC m=+57.508059851" watchObservedRunningTime="2025-12-19 02:41:03.136512538 +0000 UTC m=+57.508398284"
	Dec 19 02:41:05 functional-509202 kubelet[5279]: E1219 02:41:05.083724    5279 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-28ltp" containerName="proxy"
	Dec 19 02:41:08 functional-509202 kubelet[5279]: I1219 02:41:08.435033    5279 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001788Ki","pods":"110"}
	Dec 19 02:41:08 functional-509202 kubelet[5279]: I1219 02:41:08.435082    5279 kubelet_resources.go:64] "Allocatable" allocatable={"cpu":"2","ephemeral-storage":"17734596Ki","hugepages-2Mi":"0","memory":"4001788Ki","pods":"110"}
	
	
	==> kubernetes-dashboard [1ed389d958c505a064e22ee7c1fe1d47ca2956dfb3baecbedaffda8735d3d2da] <==
	I1219 02:40:59.000448       1 main.go:43] "Starting Metrics Scraper" version="1.2.2"
	W1219 02:40:59.000543       1 client_config.go:667] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
	I1219 02:40:59.001009       1 main.go:51] Kubernetes host: https://10.96.0.1:443
	I1219 02:40:59.001038       1 main.go:52] Namespace(s): []
	
	
	==> kubernetes-dashboard [46c9bce3872efe851f85b7325b2d33938c1e33433791d197d92eada9b2a16fe2] <==
	I1219 02:41:08.693618       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 02:41:08.693760       1 init.go:48] Using in-cluster config
	I1219 02:41:08.694049       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [c2c8610095818e557802b26915abc0e79e7465978b9c724f68222eb1f91d644e] <==
	I1219 02:40:55.660509       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 02:40:55.660590       1 init.go:49] Using in-cluster config
	I1219 02:40:55.660917       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 02:40:55.660926       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 02:40:55.660930       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 02:40:55.660935       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 02:40:55.706955       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 02:40:55.707004       1 client.go:265] Creating in-cluster Sidecar client
	I1219 02:40:55.723602       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 02:40:55.728827       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> kubernetes-dashboard [fb448daf2304e9b8fc29bd186fa5e136b7ae7321518446f91aaf0c6b219eb208] <==
	I1219 02:41:02.371777       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 02:41:02.371885       1 init.go:49] Using in-cluster config
	I1219 02:41:02.372061       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [6372c3741a37944144328ca44a89c01e06da43452eae542d3f9eb70be3995dde] <==
	W1219 02:40:45.383497       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:47.389388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:47.407117       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:49.411058       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:49.419239       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:51.424845       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:51.538444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:53.543934       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:53.555282       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:55.570887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:55.598794       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:57.608136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:57.612840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:59.615884       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:40:59.620907       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:01.626673       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:01.639532       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:03.643160       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:03.648235       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:05.653624       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:05.662185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:07.666935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:07.873090       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:09.882384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 02:41:09.896931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [bcf2fb43aa9ed67ecef48be45c8ab72779ba7c42bc89153ae6304d37441c76d8] <==
	I1219 02:40:00.296181       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 02:40:00.302159       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-509202 -n functional-509202
helpers_test.go:270: (dbg) Run:  kubectl --context functional-509202 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: busybox-mount hello-node-5758569b79-6qsvn mysql-7d7b65bc95-qhht8 sp-pod
helpers_test.go:283: ======> post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context functional-509202 describe pod busybox-mount hello-node-5758569b79-6qsvn mysql-7d7b65bc95-qhht8 sp-pod
helpers_test.go:291: (dbg) kubectl --context functional-509202 describe pod busybox-mount hello-node-5758569b79-6qsvn mysql-7d7b65bc95-qhht8 sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-509202/192.168.39.198
	Start Time:       Fri, 19 Dec 2025 02:40:34 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.6
	IPs:
	  IP:  10.244.0.6
	Containers:
	  mount-munger:
	    Container ID:  containerd://95d408e35e38a3fb1ff80cb3aa1216f8d23879649a900549bb92934fc6d3e7c9
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Dec 2025 02:40:39 +0000
	      Finished:     Fri, 19 Dec 2025 02:40:39 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-wsl2w (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-wsl2w:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  37s   default-scheduler  Successfully assigned default/busybox-mount to functional-509202
	  Normal  Pulling    36s   kubelet            spec.containers{mount-munger}: Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     32s   kubelet            spec.containers{mount-munger}: Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 4.602s (4.602s including waiting). Image size: 2395207 bytes.
	  Normal  Created    32s   kubelet            spec.containers{mount-munger}: Container created
	  Normal  Started    32s   kubelet            spec.containers{mount-munger}: Container started
	
	
	Name:             hello-node-5758569b79-6qsvn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-509202/192.168.39.198
	Start Time:       Fri, 19 Dec 2025 02:40:45 +0000
	Labels:           app=hello-node
	                  pod-template-hash=5758569b79
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/hello-node-5758569b79
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ncrzz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-ncrzz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  26s   default-scheduler  Successfully assigned default/hello-node-5758569b79-6qsvn to functional-509202
	  Normal  Pulling    26s   kubelet            spec.containers{echo-server}: Pulling image "kicbase/echo-server"
	
	
	Name:             mysql-7d7b65bc95-qhht8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-509202/192.168.39.198
	Start Time:       Fri, 19 Dec 2025 02:40:59 +0000
	Labels:           app=mysql
	                  pod-template-hash=7d7b65bc95
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/mysql-7d7b65bc95
	Containers:
	  mysql:
	    Container ID:   
	    Image:          public.ecr.aws/docker/library/mysql:8.4
	    Image ID:       
	    Port:           3306/TCP (mysql)
	    Host Port:      0/TCP (mysql)
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Limits:
	      cpu:     700m
	      memory:  700Mi
	    Requests:
	      cpu:     600m
	      memory:  512Mi
	    Environment:
	      MYSQL_ROOT_PASSWORD:  password
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-479kh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-479kh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   Burstable
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  12s   default-scheduler  Successfully assigned default/mysql-7d7b65bc95-qhht8 to functional-509202
	  Normal  Pulling    12s   kubelet            spec.containers{mysql}: Pulling image "public.ecr.aws/docker/library/mysql:8.4"
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-509202/192.168.39.198
	Start Time:       Fri, 19 Dec 2025 02:40:42 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          public.ecr.aws/nginx/nginx:alpine
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ContainerCreating
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-glt8r (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-glt8r:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  29s   default-scheduler  Successfully assigned default/sp-pod to functional-509202
	  Normal  Pulling    28s   kubelet            spec.containers{myfrontend}: Pulling image "public.ecr.aws/nginx/nginx:alpine"

                                                
                                                
-- /stdout --
helpers_test.go:294: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd FAILED: end of post-mortem logs <<<
helpers_test.go:295: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DashboardCmd (37.69s)

                                                
                                    
TestISOImage/Binaries/crictl (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/crictl
=== PAUSE TestISOImage/Binaries/crictl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/crictl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which crictl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which crictl": context deadline exceeded (2.942µs)
iso_test.go:78: failed to verify existence of "crictl" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which crictl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/crictl (0.00s)
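This and the remaining TestISOImage/Binaries failures below all report "context deadline exceeded" after only nanoseconds to microseconds, which suggests the shared test context had already expired before the `minikube ssh` probe was even run. The following is a minimal, self-contained Go sketch of that failure mode — not the harness code in iso_test.go — where `sleep 5` stands in for the `out/minikube-linux-amd64 -p guest-269272 ssh "which crictl"` call shown above.

    // Illustrative sketch only: a subprocess launched under an already-expired
    // context fails almost immediately, matching the near-zero durations above.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	// A deadline in the past mimics a parent test context that has already run out.
    	ctx, cancel := context.WithDeadline(context.Background(), time.Now().Add(-time.Second))
    	defer cancel()

    	start := time.Now()
    	err := exec.CommandContext(ctx, "sleep", "5").Run()
    	// Run returns almost immediately with a non-nil error;
    	// ctx.Err() is context.DeadlineExceeded, as reported by the test above.
    	fmt.Printf("elapsed=%s err=%v ctxErr=%v\n", time.Since(start), err, ctx.Err())
    }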

                                                
                                    
TestISOImage/Binaries/curl (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/curl
=== PAUSE TestISOImage/Binaries/curl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/curl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which curl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which curl": context deadline exceeded (643ns)
iso_test.go:78: failed to verify existence of "curl" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which curl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/curl (0.00s)

                                                
                                    
TestISOImage/Binaries/docker (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/docker
=== PAUSE TestISOImage/Binaries/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/docker
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which docker"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which docker": context deadline exceeded (329ns)
iso_test.go:78: failed to verify existence of "docker" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which docker\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/docker (0.00s)

                                                
                                    
TestISOImage/Binaries/git (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/git
=== PAUSE TestISOImage/Binaries/git

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/git
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which git"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which git": context deadline exceeded (559ns)
iso_test.go:78: failed to verify existence of "git" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which git\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/git (0.00s)

                                                
                                    
TestISOImage/Binaries/iptables (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/iptables
=== PAUSE TestISOImage/Binaries/iptables

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/iptables
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which iptables"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which iptables": context deadline exceeded (388ns)
iso_test.go:78: failed to verify existence of "iptables" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which iptables\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/iptables (0.00s)

                                                
                                    
TestISOImage/Binaries/podman (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/podman
=== PAUSE TestISOImage/Binaries/podman

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/podman
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which podman"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which podman": context deadline exceeded (425ns)
iso_test.go:78: failed to verify existence of "podman" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which podman\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/podman (0.00s)

                                                
                                    
TestISOImage/Binaries/rsync (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/rsync
=== PAUSE TestISOImage/Binaries/rsync

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/rsync
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which rsync"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which rsync": context deadline exceeded (507ns)
iso_test.go:78: failed to verify existence of "rsync" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which rsync\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/rsync (0.00s)

                                                
                                    
TestISOImage/Binaries/socat (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/socat
=== PAUSE TestISOImage/Binaries/socat

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/socat
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which socat"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which socat": context deadline exceeded (317ns)
iso_test.go:78: failed to verify existence of "socat" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which socat\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/socat (0.00s)

                                                
                                    
TestISOImage/Binaries/wget (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/wget
=== PAUSE TestISOImage/Binaries/wget

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/wget
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which wget"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which wget": context deadline exceeded (285ns)
iso_test.go:78: failed to verify existence of "wget" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which wget\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/wget (0.00s)

                                                
                                    
TestISOImage/Binaries/VBoxControl (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxControl
=== PAUSE TestISOImage/Binaries/VBoxControl

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxControl
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which VBoxControl"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which VBoxControl": context deadline exceeded (494ns)
iso_test.go:78: failed to verify existence of "VBoxControl" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which VBoxControl\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/VBoxControl (0.00s)

                                                
                                    
TestISOImage/Binaries/VBoxService (0s)

                                                
                                                
=== RUN   TestISOImage/Binaries/VBoxService
=== PAUSE TestISOImage/Binaries/VBoxService

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/Binaries/VBoxService
iso_test.go:76: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "which VBoxService"
iso_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "which VBoxService": context deadline exceeded (612ns)
iso_test.go:78: failed to verify existence of "VBoxService" binary : args "out/minikube-linux-amd64 -p guest-269272 ssh \"which VBoxService\"": context deadline exceeded
--- FAIL: TestISOImage/Binaries/VBoxService (0.00s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.62s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:338: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-638861 -n old-k8s-version-638861
start_stop_delete_test.go:272: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:45:17.659694466 +0000 UTC m=+4804.474323288
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
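The check that timed out here waits up to 9m0s for a Running pod matching the label selector k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace. Below is a rough client-go sketch of an equivalent wait loop — not the actual helper behind start_stop_delete_test.go. The kubeconfig context name "old-k8s-version-638861" is assumed from the profile name in the log, and the 5-second poll interval is arbitrary.

    // Illustrative sketch: poll for a Running kubernetes-dashboard pod,
    // giving up with "context deadline exceeded" after 9 minutes.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Build a client from the local kubeconfig, pinned to the assumed context name.
    	cfg, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
    		clientcmd.NewDefaultClientConfigLoadingRules(),
    		&clientcmd.ConfigOverrides{CurrentContext: "old-k8s-version-638861"},
    	).ClientConfig()
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
    	defer cancel()

    	for {
    		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
    			LabelSelector: "k8s-app=kubernetes-dashboard",
    		})
    		if err == nil {
    			for _, p := range pods.Items {
    				if p.Status.Phase == corev1.PodRunning {
    					fmt.Println("dashboard pod running:", p.Name)
    					return
    				}
    			}
    		}
    		select {
    		case <-ctx.Done():
    			// Same terminal state as the failure above: context deadline exceeded.
    			fmt.Println("gave up:", ctx.Err())
    			return
    		case <-time.After(5 * time.Second):
    		}
    	}
    }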
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-638861 -n old-k8s-version-638861
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-638861 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-638861 logs -n 25: (1.677937269s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────
───────────┐
	│ COMMAND │                                                                                                                       ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────
───────────┤
	│ ssh     │ -p bridge-694633 sudo cat /etc/containerd/config.toml                                                                                                                                                                                             │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo containerd config dump                                                                                                                                                                                                      │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │                     │
	│ ssh     │ -p bridge-694633 sudo systemctl cat crio --no-pager                                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                     │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo crio config                                                                                                                                                                                                                 │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p bridge-694633                                                                                                                                                                                                                                  │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p disable-driver-mounts-477416                                                                                                                                                                                                                   │ disable-driver-mounts-477416 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-638861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ stop    │ -p old-k8s-version-638861 --alsologtostderr -v=3                                                                                                                                                                                                  │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-728806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p no-preload-728806 --alsologtostderr -v=3                                                                                                                                                                                                       │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-832734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p embed-certs-832734 --alsologtostderr -v=3                                                                                                                                                                                                      │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ stop    │ -p default-k8s-diff-port-382606 --alsologtostderr -v=3                                                                                                                                                                                            │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-638861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────
───────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:36:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:36:29.621083   51711 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:36:29.621200   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621205   51711 out.go:374] Setting ErrFile to fd 2...
	I1219 03:36:29.621212   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621491   51711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:36:29.622131   51711 out.go:368] Setting JSON to false
	I1219 03:36:29.623408   51711 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4729,"bootTime":1766110661,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:36:29.623486   51711 start.go:143] virtualization: kvm guest
	I1219 03:36:29.625670   51711 out.go:179] * [default-k8s-diff-port-382606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:36:29.633365   51711 notify.go:221] Checking for updates...
	I1219 03:36:29.633417   51711 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:36:29.635075   51711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:36:29.636942   51711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:29.638374   51711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:36:29.639842   51711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:36:29.641026   51711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:36:29.642747   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:29.643478   51711 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:36:29.700163   51711 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:36:29.701162   51711 start.go:309] selected driver: kvm2
	I1219 03:36:29.701180   51711 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.701323   51711 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:36:29.702837   51711 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:29.702885   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:29.702957   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:29.703020   51711 start.go:353] cluster config:
	{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.703150   51711 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:36:29.704494   51711 out.go:179] * Starting "default-k8s-diff-port-382606" primary control-plane node in "default-k8s-diff-port-382606" cluster
	I1219 03:36:29.705691   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:29.705751   51711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:36:29.705771   51711 cache.go:65] Caching tarball of preloaded images
	I1219 03:36:29.705892   51711 preload.go:238] Found /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:36:29.705927   51711 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:36:29.706078   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:29.706318   51711 start.go:360] acquireMachinesLock for default-k8s-diff-port-382606: {Name:mkbf0ff4f4743f75373609a52c13bcf346114394 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:36:29.706374   51711 start.go:364] duration metric: took 32.309µs to acquireMachinesLock for "default-k8s-diff-port-382606"
	I1219 03:36:29.706388   51711 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:36:29.706395   51711 fix.go:54] fixHost starting: 
	I1219 03:36:29.708913   51711 fix.go:112] recreateIfNeeded on default-k8s-diff-port-382606: state=Stopped err=<nil>
	W1219 03:36:29.708943   51711 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:36:27.974088   51386 addons.go:239] Setting addon default-storageclass=true in "embed-certs-832734"
	W1219 03:36:27.974109   51386 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:36:27.974136   51386 host.go:66] Checking if "embed-certs-832734" exists ...
	I1219 03:36:27.974565   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:36:27.974582   51386 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:36:27.974599   51386 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:27.974608   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:36:27.976663   51386 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:27.976691   51386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:36:27.976771   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.977846   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.977880   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.978136   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.979376   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979747   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979820   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.979860   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980122   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.980448   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.980482   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980686   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.981056   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981521   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.981545   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981792   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:28.331935   51386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:28.393904   51386 node_ready.go:35] waiting up to 6m0s for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398272   51386 node_ready.go:49] node "embed-certs-832734" is "Ready"
	I1219 03:36:28.398297   51386 node_ready.go:38] duration metric: took 4.336343ms for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398310   51386 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:28.398457   51386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:28.475709   51386 api_server.go:72] duration metric: took 507.310055ms to wait for apiserver process to appear ...
	I1219 03:36:28.475751   51386 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:28.475776   51386 api_server.go:253] Checking apiserver healthz at https://192.168.83.196:8443/healthz ...
	I1219 03:36:28.483874   51386 api_server.go:279] https://192.168.83.196:8443/healthz returned 200:
	ok
	I1219 03:36:28.485710   51386 api_server.go:141] control plane version: v1.34.3
	I1219 03:36:28.485738   51386 api_server.go:131] duration metric: took 9.978141ms to wait for apiserver health ...
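The health check above polls https://192.168.83.196:8443/healthz until it answers 200 "ok". A minimal Go sketch of that kind of poll, with the URL taken from the log; skipping TLS verification here is purely for illustration and not how minikube itself authenticates:

	// healthz_poll.go - minimal sketch of polling an apiserver /healthz endpoint.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout:   2 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // illustration only
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil // healthz reported "ok", as in the log
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver not healthy after %s", timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.83.196:8443/healthz", time.Minute); err != nil {
			fmt.Println(err)
		}
	}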
	I1219 03:36:28.485751   51386 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:36:28.493956   51386 system_pods.go:59] 8 kube-system pods found
	I1219 03:36:28.493996   51386 system_pods.go:61] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.494024   51386 system_pods.go:61] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.494037   51386 system_pods.go:61] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.494044   51386 system_pods.go:61] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.494052   51386 system_pods.go:61] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.494058   51386 system_pods.go:61] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.494064   51386 system_pods.go:61] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.494074   51386 system_pods.go:61] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.494080   51386 system_pods.go:74] duration metric: took 8.32329ms to wait for pod list to return data ...
	I1219 03:36:28.494090   51386 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:36:28.500269   51386 default_sa.go:45] found service account: "default"
	I1219 03:36:28.500298   51386 default_sa.go:55] duration metric: took 6.200379ms for default service account to be created ...
	I1219 03:36:28.500309   51386 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:36:28.601843   51386 system_pods.go:86] 8 kube-system pods found
	I1219 03:36:28.601871   51386 system_pods.go:89] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.601880   51386 system_pods.go:89] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.601887   51386 system_pods.go:89] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.601892   51386 system_pods.go:89] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.601896   51386 system_pods.go:89] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.601902   51386 system_pods.go:89] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.601921   51386 system_pods.go:89] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.601930   51386 system_pods.go:89] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.601938   51386 system_pods.go:126] duration metric: took 101.621956ms to wait for k8s-apps to be running ...
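The pod checks above list the kube-system pods and report which are Running. A rough client-go equivalent of that listing, assuming a kubeconfig at /var/lib/minikube/kubeconfig (an assumed path, not necessarily the one the test uses):

	// pods_running.go - sketch of a "kube-system pods running" check with client-go.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			// A pod counts as healthy here only when its phase is Running.
			fmt.Printf("%s: phase=%s running=%v\n", p.Name, p.Status.Phase, p.Status.Phase == corev1.PodRunning)
		}
	}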
	I1219 03:36:28.601947   51386 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:36:28.602031   51386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:36:28.618616   51386 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:36:28.685146   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:36:28.685175   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:36:28.694410   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:28.696954   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:28.726390   51386 system_svc.go:56] duration metric: took 124.434217ms WaitForService to wait for kubelet
	I1219 03:36:28.726426   51386 kubeadm.go:587] duration metric: took 758.032732ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:28.726450   51386 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:36:28.726520   51386 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:36:28.739364   51386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:36:28.739393   51386 node_conditions.go:123] node cpu capacity is 2
	I1219 03:36:28.739407   51386 node_conditions.go:105] duration metric: took 12.951551ms to run NodePressure ...
	I1219 03:36:28.739421   51386 start.go:242] waiting for startup goroutines ...
	I1219 03:36:28.774949   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:36:28.774981   51386 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:36:28.896758   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:28.896785   51386 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:36:29.110522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:31.016418   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.319423876s)
	I1219 03:36:31.016497   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.322025841s)
	I1219 03:36:31.016534   51386 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.28998192s)
	I1219 03:36:31.016597   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.906047637s)
	I1219 03:36:31.016610   51386 addons.go:500] Verifying addon metrics-server=true in "embed-certs-832734"
	I1219 03:36:31.016613   51386 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:36:29.711054   51711 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-382606" ...
	I1219 03:36:29.711101   51711 main.go:144] libmachine: starting domain...
	I1219 03:36:29.711116   51711 main.go:144] libmachine: ensuring networks are active...
	I1219 03:36:29.712088   51711 main.go:144] libmachine: Ensuring network default is active
	I1219 03:36:29.712549   51711 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-382606 is active
	I1219 03:36:29.713312   51711 main.go:144] libmachine: getting domain XML...
	I1219 03:36:29.714943   51711 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-382606</name>
	  <uuid>342506c1-9e12-4922-9438-23d9d57eea28</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/default-k8s-diff-port-382606.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:fb:a4:4e'/>
	      <source network='mk-default-k8s-diff-port-382606'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:57:4f:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
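The restart path dumps the full libvirt domain XML before starting the VM. As a small illustration of what that XML carries, a Go sketch that pulls out the key fields with encoding/xml; the domain struct is a hypothetical subset of the schema, and domain.xml is assumed to hold the XML shown above:

	// domain_xml.go - sketch of extracting name, memory, vCPUs and NICs from the domain XML.
	package main

	import (
		"encoding/xml"
		"fmt"
		"os"
	)

	type domain struct {
		Name   string `xml:"name"`
		Memory struct {
			Unit  string `xml:"unit,attr"`
			Value string `xml:",chardata"`
		} `xml:"memory"`
		VCPU       int `xml:"vcpu"`
		Interfaces []struct {
			MAC struct {
				Address string `xml:"address,attr"`
			} `xml:"mac"`
			Source struct {
				Network string `xml:"network,attr"`
			} `xml:"source"`
		} `xml:"devices>interface"`
	}

	func main() {
		raw, err := os.ReadFile("domain.xml") // the XML dumped in the log, saved to a file
		if err != nil {
			panic(err)
		}
		var d domain
		if err := xml.Unmarshal(raw, &d); err != nil {
			panic(err)
		}
		fmt.Printf("domain %s: %s %s memory, %d vCPU\n", d.Name, d.Memory.Value, d.Memory.Unit, d.VCPU)
		for _, iface := range d.Interfaces {
			fmt.Printf("  nic %s on network %s\n", iface.MAC.Address, iface.Source.Network)
		}
	}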
	
	I1219 03:36:31.342655   51711 main.go:144] libmachine: waiting for domain to start...
	I1219 03:36:31.345734   51711 main.go:144] libmachine: domain is now running
	I1219 03:36:31.345778   51711 main.go:144] libmachine: waiting for IP...
	I1219 03:36:31.347227   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348141   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has current primary IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348163   51711 main.go:144] libmachine: found domain IP: 192.168.72.129
	I1219 03:36:31.348170   51711 main.go:144] libmachine: reserving static IP address...
	I1219 03:36:31.348677   51711 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.348704   51711 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-382606 - found existing host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"}
	I1219 03:36:31.348713   51711 main.go:144] libmachine: reserved static IP address 192.168.72.129 for domain default-k8s-diff-port-382606
	I1219 03:36:31.348731   51711 main.go:144] libmachine: waiting for SSH...
	I1219 03:36:31.348741   51711 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:36:31.351582   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352122   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.352155   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352422   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:31.352772   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:31.352782   51711 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:36:34.417281   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
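The "waiting for SSH" phase keeps dialing port 22 until the guest answers; the "no route to host" and "connection refused" errors above are the expected failures while the VM boots. A minimal Go sketch of such a dial loop, using the address from the log:

	// ssh_wait.go - sketch of waiting for a guest's SSH port to become reachable.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func waitForTCP(addr string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
			if err == nil {
				conn.Close()
				return nil // port is reachable; the SSH handshake can proceed
			}
			// "no route to host" / "connection refused" are normal while the VM boots.
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("%s not reachable after %s", addr, timeout)
	}

	func main() {
		if err := waitForTCP("192.168.72.129:22", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}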
	I1219 03:36:31.980522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:36:35.707529   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.726958549s)
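The dashboard addon is installed by shelling out to helm upgrade --install with the flags shown above. A Go sketch of issuing the same command via os/exec, assuming helm is already on PATH and using an assumed kubeconfig path that mirrors the log:

	// helm_dashboard.go - sketch of driving the dashboard install by shelling out to helm.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("helm", "upgrade", "--install", "kubernetes-dashboard", "kubernetes-dashboard",
			"--create-namespace",
			"--repo", "https://kubernetes.github.io/dashboard/",
			"--namespace", "kubernetes-dashboard",
			"--set", "nginx.enabled=false",
			"--set", "cert-manager.enabled=false",
			"--set", "metrics-server.enabled=false",
			"--set", "kong.proxy.type=NodePort")
		// Assumed kubeconfig location, mirroring the KUBECONFIG used in the log.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		fmt.Println(string(out))
		if err != nil {
			fmt.Println("helm install failed:", err)
		}
	}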
	I1219 03:36:35.707614   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:36:36.641432   51386 addons.go:500] Verifying addon dashboard=true in "embed-certs-832734"
	I1219 03:36:36.645285   51386 out.go:179] * Verifying dashboard addon...
	I1219 03:36:36.647847   51386 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:36:36.659465   51386 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:36:36.659491   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.154819   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.652042   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.152461   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.651730   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.152475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.652155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.153311   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.652427   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:41.151837   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.497282   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
	I1219 03:36:43.498703   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: connection refused
	I1219 03:36:41.654155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.154727   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.653186   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.152647   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.651177   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.154241   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.651752   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:45.152244   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.124796   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.151832   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.628602   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:36:46.632304   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.632730   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.632753   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.633056   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:46.633240   51711 machine.go:94] provisionDockerMachine start ...
	I1219 03:36:46.635441   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.635889   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.635934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.636109   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.636298   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.636308   51711 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:36:46.752911   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:36:46.752937   51711 buildroot.go:166] provisioning hostname "default-k8s-diff-port-382606"
	I1219 03:36:46.756912   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757425   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.757463   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757703   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.757935   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.757955   51711 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382606 && echo "default-k8s-diff-port-382606" | sudo tee /etc/hostname
	I1219 03:36:46.902266   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382606
	
	I1219 03:36:46.905791   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906293   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.906323   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906555   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.906758   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.906774   51711 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382606/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:36:47.045442   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
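The hostname step runs the small script above over SSH so that /etc/hosts maps 127.0.1.1 to the machine name. The same edit expressed in Go on an in-memory copy of the file; ensureHostsEntry is a hypothetical helper, not minikube's own code:

	// hosts_patch.go - sketch of the /etc/hosts edit performed by the SSH script above.
	package main

	import (
		"fmt"
		"regexp"
		"strings"
	)

	func ensureHostsEntry(hosts, name string) string {
		if regexp.MustCompile(`(?m)^.*\s`+regexp.QuoteMeta(name)+`$`).MatchString(hosts) {
			return hosts // hostname already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name) // rewrite the existing 127.0.1.1 line
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + name + "\n" // or append a new line
	}

	func main() {
		hosts := "127.0.0.1 localhost\n127.0.1.1 minikube\n"
		fmt.Print(ensureHostsEntry(hosts, "default-k8s-diff-port-382606"))
	}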
	I1219 03:36:47.045472   51711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5003/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5003/.minikube}
	I1219 03:36:47.045496   51711 buildroot.go:174] setting up certificates
	I1219 03:36:47.045505   51711 provision.go:84] configureAuth start
	I1219 03:36:47.049643   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.050087   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.050115   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.052980   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053377   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.053417   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053596   51711 provision.go:143] copyHostCerts
	I1219 03:36:47.053653   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem, removing ...
	I1219 03:36:47.053678   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem
	I1219 03:36:47.053772   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem (1082 bytes)
	I1219 03:36:47.053902   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem, removing ...
	I1219 03:36:47.053919   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem
	I1219 03:36:47.053949   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem (1123 bytes)
	I1219 03:36:47.054027   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem, removing ...
	I1219 03:36:47.054036   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem
	I1219 03:36:47.054059   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem (1675 bytes)
	I1219 03:36:47.054113   51711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382606 san=[127.0.0.1 192.168.72.129 default-k8s-diff-port-382606 localhost minikube]
	I1219 03:36:47.093786   51711 provision.go:177] copyRemoteCerts
	I1219 03:36:47.093848   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:36:47.096938   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097402   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.097443   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097608   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.187589   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:36:47.229519   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:36:47.264503   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:36:47.294746   51711 provision.go:87] duration metric: took 249.22829ms to configureAuth
	I1219 03:36:47.294772   51711 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:36:47.294974   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:47.294990   51711 machine.go:97] duration metric: took 661.738495ms to provisionDockerMachine
	I1219 03:36:47.295000   51711 start.go:293] postStartSetup for "default-k8s-diff-port-382606" (driver="kvm2")
	I1219 03:36:47.295020   51711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:36:47.295079   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:36:47.297915   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298388   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.298414   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298592   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.391351   51711 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:36:47.396636   51711 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:36:47.396664   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/addons for local assets ...
	I1219 03:36:47.396734   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/files for local assets ...
	I1219 03:36:47.396833   51711 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem -> 89782.pem in /etc/ssl/certs
	I1219 03:36:47.396981   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:36:47.414891   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:47.450785   51711 start.go:296] duration metric: took 155.770681ms for postStartSetup
	I1219 03:36:47.450829   51711 fix.go:56] duration metric: took 17.744433576s for fixHost
	I1219 03:36:47.453927   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454408   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.454438   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454581   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:47.454774   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:47.454784   51711 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:36:47.578960   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766115407.541226750
	
	I1219 03:36:47.578984   51711 fix.go:216] guest clock: 1766115407.541226750
	I1219 03:36:47.578993   51711 fix.go:229] Guest: 2025-12-19 03:36:47.54122675 +0000 UTC Remote: 2025-12-19 03:36:47.450834556 +0000 UTC m=+17.907032910 (delta=90.392194ms)
	I1219 03:36:47.579033   51711 fix.go:200] guest clock delta is within tolerance: 90.392194ms
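The fix path compares the guest's date +%s.%N output against the host clock and accepts a small delta (about 90ms here). A sketch of that comparison; the two-second tolerance below is an assumption, not minikube's actual threshold:

	// clock_delta.go - sketch of comparing the guest clock (date +%s.%N output) to the host clock.
	package main

	import (
		"fmt"
		"math"
		"strconv"
		"time"
	)

	func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(guestOut, 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second))) // float rounding is fine for a sketch
		return host.Sub(guest), nil
	}

	func main() {
		delta, err := clockDelta("1766115407.541226750", time.Now())
		if err != nil {
			panic(err)
		}
		within := math.Abs(delta.Seconds()) < 2 // assumed tolerance
		fmt.Printf("guest clock delta: %s (within tolerance: %v)\n", delta, within)
	}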
	I1219 03:36:47.579039   51711 start.go:83] releasing machines lock for "default-k8s-diff-port-382606", held for 17.872657006s
	I1219 03:36:47.582214   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.582699   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.582737   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.583361   51711 ssh_runner.go:195] Run: cat /version.json
	I1219 03:36:47.583439   51711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:36:47.586735   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.586965   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587209   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587236   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587400   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.587637   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587663   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587852   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.701374   51711 ssh_runner.go:195] Run: systemctl --version
	I1219 03:36:47.707956   51711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:36:47.714921   51711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:36:47.714993   51711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:36:47.736464   51711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:36:47.736487   51711 start.go:496] detecting cgroup driver to use...
	I1219 03:36:47.736550   51711 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:36:47.771913   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:36:47.789225   51711 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:36:47.789292   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:36:47.814503   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:36:47.832961   51711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:36:48.004075   51711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:36:48.227207   51711 docker.go:234] disabling docker service ...
	I1219 03:36:48.227297   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:36:48.245923   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:36:48.261992   51711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:36:48.443743   51711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:36:48.627983   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:36:48.647391   51711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:36:48.673139   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:36:48.690643   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:36:48.703896   51711 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 03:36:48.703949   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 03:36:48.718567   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.732932   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:36:48.749170   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.772676   51711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:36:48.787125   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:36:48.800190   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:36:48.812900   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
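The containerd configuration step rewrites /etc/containerd/config.toml with a series of sed commands, among them flipping SystemdCgroup to false for the cgroupfs driver. The same rewrite expressed in Go on an in-memory snippet, using the regex idea from the sed invocation above:

	// containerd_cgroup.go - sketch of the SystemdCgroup rewrite done via sed in the log.
	package main

	import (
		"fmt"
		"regexp"
	)

	func main() {
		config := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
		re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
		updated := re.ReplaceAllString(config, "${1}SystemdCgroup = false") // cgroupfs cgroup driver
		fmt.Print(updated)
	}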
	I1219 03:36:48.826147   51711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:36:48.841046   51711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:36:48.841107   51711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:36:48.867440   51711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:36:48.879351   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:49.048166   51711 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:36:49.092003   51711 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:36:49.092122   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:36:49.098374   51711 retry.go:31] will retry after 1.402478088s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1219 03:36:50.501086   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
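After restarting containerd, minikube stats /run/containerd/containerd.sock and retries until the socket exists (one ~1.4s retry above). A sketch of such a wait loop; the interval and timeout here are assumptions rather than minikube's exact values:

	// socket_wait.go - sketch of waiting for the containerd socket to appear after a restart.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
				return nil // containerd is listening
			}
			time.Sleep(time.Second) // fixed 1s interval; the log shows a ~1.4s retry
		}
		return fmt.Errorf("%s did not appear within %s", path, timeout)
	}

	func main() {
		if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
			fmt.Println(err)
		}
	}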
	I1219 03:36:50.509026   51711 start.go:564] Will wait 60s for crictl version
	I1219 03:36:50.509089   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:50.514426   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:36:50.554888   51711 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:36:50.554956   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.583326   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.611254   51711 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:36:46.651075   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.206126   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.654221   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.152458   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.651475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.152863   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.655859   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.152073   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.655613   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.153352   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.653895   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.151537   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.653336   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.156131   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.652752   51386 kapi.go:107] duration metric: took 17.00490252s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:36:53.654689   51386 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-832734 addons enable metrics-server
	
	I1219 03:36:53.656077   51386 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
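The kapi.go loop above polls the kubernetes-dashboard-web pod by label roughly twice per second until it leaves Pending. A stripped-down client-go sketch of that kind of label-selector wait (the kubeconfig path and the kubernetes-dashboard namespace are assumptions here; minikube's actual helper lives in kapi.go):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
		for i := 0; i < 120; i++ { // ~1 minute at 500ms per attempt
			pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: selector})
			if err == nil {
				for _, p := range pods.Items {
					if p.Status.Phase == corev1.PodRunning {
						fmt.Println("pod running:", p.Name)
						return
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("gave up waiting for", selector)
	}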
	I1219 03:36:50.615098   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615498   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:50.615532   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615798   51711 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1219 03:36:50.620834   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
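The bash one-liner above strips any existing host.minikube.internal line from /etc/hosts and appends the fresh mapping, so repeated starts do not accumulate duplicates. The same idea as a pure Go function over the file contents (just a sketch; minikube performs it remotely over SSH with sudo):

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry drops any line already ending in "<TAB>host", like the
	// grep -v $'\thost.minikube.internal$' above, then appends ip<TAB>host.
	// Blank lines are dropped too, purely for brevity.
	func ensureHostsEntry(contents, ip, host string) string {
		var kept []string
		for _, line := range strings.Split(contents, "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+host) {
				continue
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return strings.Join(kept, "\n") + "\n"
	}

	func main() {
		hosts := "127.0.0.1\tlocalhost\n192.168.72.5\thost.minikube.internal\n"
		fmt.Print(ensureHostsEntry(hosts, "192.168.72.1", "host.minikube.internal"))
	}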
	I1219 03:36:50.637469   51711 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:36:50.637614   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:50.637684   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.668556   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.668578   51711 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:36:50.668632   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.703466   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.703488   51711 cache_images.go:86] Images are preloaded, skipping loading
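Both `crictl images --output json` calls above are used to decide whether the preload tarball still needs to be extracted: if every expected image is already present in the runtime, extraction and image loading are skipped. A rough sketch of that comparison, assuming the CRI-style JSON shape crictl prints (an `images` array whose entries carry `repoTags`; the expected list below is illustrative, not the real preload manifest):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	type criImages struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var listed criImages
		if err := json.Unmarshal(out, &listed); err != nil {
			fmt.Println("unexpected JSON:", err)
			return
		}
		have := map[string]bool{}
		for _, img := range listed.Images {
			for _, tag := range img.RepoTags {
				have[tag] = true
			}
		}
		for _, want := range []string{"registry.k8s.io/kube-apiserver:v1.34.3", "registry.k8s.io/kube-scheduler:v1.34.3"} {
			fmt.Println(want, "preloaded:", have[want])
		}
	}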
	I1219 03:36:50.703495   51711 kubeadm.go:935] updating node { 192.168.72.129 8444 v1.34.3 containerd true true} ...
	I1219 03:36:50.703585   51711 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-382606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:36:50.703648   51711 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:36:50.734238   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:50.734260   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:50.734277   51711 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:36:50.734306   51711 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382606 NodeName:default-k8s-diff-port-382606 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:36:50.734471   51711 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-382606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.129"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:36:50.734558   51711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:36:50.746945   51711 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:36:50.746995   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:36:50.758948   51711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1219 03:36:50.782923   51711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:36:50.807164   51711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
	I1219 03:36:50.829562   51711 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I1219 03:36:50.833888   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:36:50.849703   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:51.014216   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:51.062118   51711 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606 for IP: 192.168.72.129
	I1219 03:36:51.062147   51711 certs.go:195] generating shared ca certs ...
	I1219 03:36:51.062168   51711 certs.go:227] acquiring lock for ca certs: {Name:mk6db7e23547b9013e447eaa0ddba18e05213211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.062409   51711 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key
	I1219 03:36:51.062517   51711 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key
	I1219 03:36:51.062542   51711 certs.go:257] generating profile certs ...
	I1219 03:36:51.062681   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/client.key
	I1219 03:36:51.062791   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key.13c41c2b
	I1219 03:36:51.062855   51711 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key
	I1219 03:36:51.063062   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem (1338 bytes)
	W1219 03:36:51.063113   51711 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978_empty.pem, impossibly tiny 0 bytes
	I1219 03:36:51.063130   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:36:51.063176   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:36:51.063218   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:36:51.063256   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem (1675 bytes)
	I1219 03:36:51.063324   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:51.064049   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:36:51.108621   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:36:51.164027   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:36:51.199337   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:36:51.234216   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:36:51.283158   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:36:51.314148   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:36:51.344498   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:36:51.374002   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:36:51.403858   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem --> /usr/share/ca-certificates/8978.pem (1338 bytes)
	I1219 03:36:51.438346   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /usr/share/ca-certificates/89782.pem (1708 bytes)
	I1219 03:36:51.476174   51711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:36:51.499199   51711 ssh_runner.go:195] Run: openssl version
	I1219 03:36:51.506702   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.518665   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8978.pem /etc/ssl/certs/8978.pem
	I1219 03:36:51.530739   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536107   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:37 /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536167   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.543417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:36:51.554750   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8978.pem /etc/ssl/certs/51391683.0
	I1219 03:36:51.566106   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.577342   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89782.pem /etc/ssl/certs/89782.pem
	I1219 03:36:51.588583   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594342   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:37 /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594386   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.602417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.614493   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89782.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.626108   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.638273   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:36:51.650073   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655546   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655600   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.662728   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:36:51.675457   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
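The openssl/ln sequence above installs each CA into the OpenSSL trust directory under its subject-hash name: `openssl x509 -hash -noout` prints the hash, and the certificate is then symlinked as /etc/ssl/certs/<hash>.0 (e.g. b5213941.0 for minikubeCA). A small sketch of those two steps (not minikube's code; it would need enough privileges to write /etc/ssl/certs):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> certPath, the layout
	// OpenSSL uses to look up trusted CAs.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := "/etc/ssl/certs/" + hash + ".0"
		os.Remove(link) // replace any stale link, like ln -fs
		return os.Symlink(certPath, link)
	}

	func main() {
		fmt.Println(linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"))
	}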
	I1219 03:36:51.687999   51711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:36:51.693178   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:36:51.700656   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:36:51.708623   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:36:51.715865   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:36:51.725468   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:36:51.732847   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
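The `-checkend 86400` runs above verify that none of the existing control-plane certificates expire within the next 24 hours before they are reused. The same check in pure Go with crypto/x509 (a sketch; the certificate path is taken from the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// validFor reports whether the PEM certificate at path remains valid
	// for at least d, like `openssl x509 -checkend`.
	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Until(cert.NotAfter) > d, nil
	}

	func main() {
		fmt.Println(validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour))
	}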
	I1219 03:36:51.739988   51711 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:51.740068   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:36:51.740145   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.779756   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.779780   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.779786   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.779790   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.779794   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.779800   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.779804   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.779808   51711 cri.go:92] found id: ""
	I1219 03:36:51.779864   51711 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1219 03:36:51.796814   51711 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:36:51Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1219 03:36:51.796914   51711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:36:51.809895   51711 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:36:51.809912   51711 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:36:51.809956   51711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:36:51.821465   51711 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:36:51.822684   51711 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-382606" does not appear in /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:51.823576   51711 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5003/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-382606" cluster setting kubeconfig missing "default-k8s-diff-port-382606" context setting]
	I1219 03:36:51.824679   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.826925   51711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:36:51.838686   51711 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.129
	I1219 03:36:51.838723   51711 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:36:51.838740   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1219 03:36:51.838793   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.874959   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.874981   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.874995   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.874998   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.875001   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.875004   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.875019   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.875022   51711 cri.go:92] found id: ""
	I1219 03:36:51.875027   51711 cri.go:255] Stopping containers: [64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c]
	I1219 03:36:51.875080   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:51.879700   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c
	I1219 03:36:51.939513   51711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:36:51.985557   51711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:36:51.999714   51711 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:36:51.999739   51711 kubeadm.go:158] found existing configuration files:
	
	I1219 03:36:51.999807   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:36:52.011529   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:36:52.011594   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:36:52.023630   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:36:52.036507   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:36:52.036566   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:36:52.048019   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.061421   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:36:52.061498   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.073436   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:36:52.084186   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:36:52.084244   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
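Each grep above asks whether an existing kubeconfig under /etc/kubernetes still points at https://control-plane.minikube.internal:8444; if it does not (here the files simply do not exist), the file is removed so kubeadm can regenerate it. A compact sketch of that decision (endpoint and paths taken from the log; not minikube's kubeadm.go):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const endpoint = "https://control-plane.minikube.internal:8444"
		files := []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		}
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err != nil || !strings.Contains(string(data), endpoint) {
				// Missing or pointing elsewhere: drop it so
				// `kubeadm init phase kubeconfig` recreates it.
				os.Remove(f)
				fmt.Println("removed stale", f)
				continue
			}
			fmt.Println("keeping", f)
		}
	}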
	I1219 03:36:52.098426   51711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:36:52.111056   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:52.261515   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.323343   51711 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.061779829s)
	I1219 03:36:54.323428   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.593075   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
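During a restart minikube does not rerun `kubeadm init` wholesale; it replays individual phases against the generated /var/tmp/minikube/kubeadm.yaml: certs, kubeconfig, kubelet-start and control-plane here, with the etcd phase a few lines further down after the interleaved embed-certs output. A sketch of that phased sequence via os/exec (binary and config paths taken from the log; error handling trimmed):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
		for _, phase := range phases {
			cmd := fmt.Sprintf(
				`env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`,
				phase)
			out, err := exec.Command("sudo", "/bin/bash", "-c", cmd).CombinedOutput()
			if err != nil {
				fmt.Printf("phase %q failed: %v\n%s\n", phase, err, out)
				return
			}
		}
		fmt.Println("all phases completed")
	}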
	I1219 03:36:53.657242   51386 addons.go:546] duration metric: took 25.688774629s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1219 03:36:53.657289   51386 start.go:247] waiting for cluster config update ...
	I1219 03:36:53.657306   51386 start.go:256] writing updated cluster config ...
	I1219 03:36:53.657575   51386 ssh_runner.go:195] Run: rm -f paused
	I1219 03:36:53.663463   51386 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:53.667135   51386 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.672738   51386 pod_ready.go:94] pod "coredns-66bc5c9577-4csbt" is "Ready"
	I1219 03:36:53.672765   51386 pod_ready.go:86] duration metric: took 5.607283ms for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.675345   51386 pod_ready.go:83] waiting for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.679709   51386 pod_ready.go:94] pod "etcd-embed-certs-832734" is "Ready"
	I1219 03:36:53.679732   51386 pod_ready.go:86] duration metric: took 4.36675ms for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.681513   51386 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.685784   51386 pod_ready.go:94] pod "kube-apiserver-embed-certs-832734" is "Ready"
	I1219 03:36:53.685803   51386 pod_ready.go:86] duration metric: took 4.273628ms for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.688112   51386 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.068844   51386 pod_ready.go:94] pod "kube-controller-manager-embed-certs-832734" is "Ready"
	I1219 03:36:54.068878   51386 pod_ready.go:86] duration metric: took 380.74628ms for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.268799   51386 pod_ready.go:83] waiting for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.668935   51386 pod_ready.go:94] pod "kube-proxy-j49gn" is "Ready"
	I1219 03:36:54.668971   51386 pod_ready.go:86] duration metric: took 400.137967ms for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.868862   51386 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269481   51386 pod_ready.go:94] pod "kube-scheduler-embed-certs-832734" is "Ready"
	I1219 03:36:55.269512   51386 pod_ready.go:86] duration metric: took 400.62266ms for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269530   51386 pod_ready.go:40] duration metric: took 1.60604049s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:55.329865   51386 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:36:55.331217   51386 out.go:179] * Done! kubectl is now configured to use "embed-certs-832734" cluster and "default" namespace by default
	I1219 03:36:54.658040   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.764830   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:54.764901   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.265628   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.765546   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.265137   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.294858   51711 api_server.go:72] duration metric: took 1.53003596s to wait for apiserver process to appear ...
	I1219 03:36:56.294894   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:56.294920   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:56.295516   51711 api_server.go:269] stopped: https://192.168.72.129:8444/healthz: Get "https://192.168.72.129:8444/healthz": dial tcp 192.168.72.129:8444: connect: connection refused
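The wait loop that follows polls https://192.168.72.129:8444/healthz roughly twice per second, treating connection refused, 403 (the client evidently hits the endpoint anonymously before RBAC bootstrap finishes) and 500 (post-start hooks still failing) as "not ready yet", and only stops on 200. A bare-bones Go version of that poll (a sketch; InsecureSkipVerify stands in for proper CA handling against the self-signed serving cert, and the 4-minute budget is illustrative):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.72.129:8444/healthz"
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				code := resp.StatusCode
				resp.Body.Close()
				if code == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				fmt.Println("not ready yet, status", code) // 403/500 while hooks finish
			}
			time.Sleep(500 * time.Millisecond)
		}
		fmt.Println("timed out waiting for healthz")
	}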
	I1219 03:36:56.795253   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.818365   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.818396   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:36:59.818426   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.867609   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.867642   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:37:00.295133   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.300691   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.300720   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:00.795111   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.825034   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.825068   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.295554   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.307047   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.307078   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.795401   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.800055   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.800091   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.295888   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.302103   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.302125   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.795818   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.802296   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.802326   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:03.296021   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:03.301661   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:03.310379   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:03.310412   51711 api_server.go:131] duration metric: took 7.01550899s to wait for apiserver health ...
	I1219 03:37:03.310425   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:37:03.310437   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:37:03.312477   51711 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:37:03.313819   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:37:03.331177   51711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:37:03.360466   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:03.365800   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:03.365852   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:37:03.365866   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:03.365876   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:03.365889   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:37:03.365896   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:03.365910   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:03.365918   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:03.365924   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:03.365935   51711 system_pods.go:74] duration metric: took 5.441032ms to wait for pod list to return data ...
	I1219 03:37:03.365944   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:03.369512   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:03.369539   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:03.369553   51711 node_conditions.go:105] duration metric: took 3.601059ms to run NodePressure ...
	I1219 03:37:03.369618   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:37:03.647329   51711 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651092   51711 kubeadm.go:744] kubelet initialised
	I1219 03:37:03.651116   51711 kubeadm.go:745] duration metric: took 3.75629ms waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651137   51711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:37:03.667607   51711 ops.go:34] apiserver oom_adj: -16
	I1219 03:37:03.667629   51711 kubeadm.go:602] duration metric: took 11.857709737s to restartPrimaryControlPlane
	I1219 03:37:03.667638   51711 kubeadm.go:403] duration metric: took 11.927656699s to StartCluster
	I1219 03:37:03.667662   51711 settings.go:142] acquiring lock: {Name:mk7f7ba85357bfc9fca2e66b70b16d967ca355d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.667744   51711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:37:03.669684   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.669943   51711 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:37:03.670026   51711 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:37:03.670125   51711 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670141   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:37:03.670153   51711 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670165   51711 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670174   51711 addons.go:248] addon metrics-server should already be in state true
	I1219 03:37:03.670145   51711 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382606"
	I1219 03:37:03.670175   51711 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670219   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.670222   51711 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382606"
	I1219 03:37:03.670185   51711 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670315   51711 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670328   51711 addons.go:248] addon dashboard should already be in state true
	I1219 03:37:03.670352   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	W1219 03:37:03.670200   51711 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:37:03.670428   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.671212   51711 out.go:179] * Verifying Kubernetes components...
	I1219 03:37:03.672712   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:37:03.673624   51711 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:03.673642   51711 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:37:03.674241   51711 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:37:03.674256   51711 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:37:03.674842   51711 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.674857   51711 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:37:03.674871   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.675431   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:37:03.675448   51711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:37:03.675481   51711 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:03.675502   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:37:03.677064   51711 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:03.677081   51711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:37:03.677620   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678481   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.678567   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678872   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.680203   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680419   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680904   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.680934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681162   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681407   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681444   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681467   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681685   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681950   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681982   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.682175   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.929043   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:37:03.969693   51711 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:04.174684   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:04.182529   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:04.184635   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:37:04.184660   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:37:04.197532   51711 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:37:04.242429   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:37:04.242455   51711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:37:04.309574   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:04.309600   51711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:37:04.367754   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:05.660040   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.485300577s)
	I1219 03:37:05.660070   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.477513606s)
	I1219 03:37:05.660116   51711 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.462552784s)
	I1219 03:37:05.660185   51711 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:37:05.673056   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.305263658s)
	I1219 03:37:05.673098   51711 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-382606"
	I1219 03:37:05.673137   51711 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	W1219 03:37:05.974619   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:06.630759   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	W1219 03:37:08.472974   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:10.195765   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.56493028s)
	I1219 03:37:10.195868   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:10.536948   51711 node_ready.go:49] node "default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:10.536984   51711 node_ready.go:38] duration metric: took 6.567254454s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:10.536999   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:37:10.537074   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:37:10.631962   51711 api_server.go:72] duration metric: took 6.961979571s to wait for apiserver process to appear ...
	I1219 03:37:10.631998   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:37:10.632041   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:10.633102   51711 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-382606"
	I1219 03:37:10.637827   51711 out.go:179] * Verifying dashboard addon...
	I1219 03:37:10.641108   51711 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:37:10.648897   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:10.650072   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:10.650099   51711 api_server.go:131] duration metric: took 18.093601ms to wait for apiserver health ...
	I1219 03:37:10.650110   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:10.655610   51711 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:37:10.655627   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:10.657971   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:10.657998   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.658023   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.658033   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.658042   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.658048   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.658055   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.658064   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.658069   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.658080   51711 system_pods.go:74] duration metric: took 7.963499ms to wait for pod list to return data ...
	I1219 03:37:10.658089   51711 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:37:10.668090   51711 default_sa.go:45] found service account: "default"
	I1219 03:37:10.668118   51711 default_sa.go:55] duration metric: took 10.020956ms for default service account to be created ...
	I1219 03:37:10.668130   51711 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:37:10.680469   51711 system_pods.go:86] 8 kube-system pods found
	I1219 03:37:10.680493   51711 system_pods.go:89] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.680507   51711 system_pods.go:89] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.680513   51711 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.680520   51711 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.680525   51711 system_pods.go:89] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.680532   51711 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.680540   51711 system_pods.go:89] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.680555   51711 system_pods.go:89] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.680567   51711 system_pods.go:126] duration metric: took 12.428884ms to wait for k8s-apps to be running ...
	I1219 03:37:10.680577   51711 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:37:10.680634   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:37:10.723844   51711 system_svc.go:56] duration metric: took 43.258925ms WaitForService to wait for kubelet
	I1219 03:37:10.723871   51711 kubeadm.go:587] duration metric: took 7.05389644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:37:10.723887   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:10.731598   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:10.731620   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:10.731629   51711 node_conditions.go:105] duration metric: took 7.738835ms to run NodePressure ...
	I1219 03:37:10.731640   51711 start.go:242] waiting for startup goroutines ...
	I1219 03:37:11.145699   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:11.645111   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.144952   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.644987   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.151074   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.645695   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.146399   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.645725   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.146044   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.645372   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.145700   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.645126   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.145189   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.645089   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.151071   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.645879   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.145525   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.645572   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.144405   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.647145   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.145368   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.653732   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.146443   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.645800   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.145131   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.644929   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.145023   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.646072   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.145868   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.647994   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.147617   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.648227   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.149067   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.645432   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.145986   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.645392   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:29.149926   51711 kapi.go:107] duration metric: took 18.508817791s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:37:29.152664   51711 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-382606 addons enable metrics-server
	
	I1219 03:37:29.153867   51711 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:37:29.155085   51711 addons.go:546] duration metric: took 25.485078365s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:37:29.155131   51711 start.go:247] waiting for cluster config update ...
	I1219 03:37:29.155147   51711 start.go:256] writing updated cluster config ...
	I1219 03:37:29.156022   51711 ssh_runner.go:195] Run: rm -f paused
	I1219 03:37:29.170244   51711 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:29.178962   51711 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.186205   51711 pod_ready.go:94] pod "coredns-66bc5c9577-bzq6s" is "Ready"
	I1219 03:37:29.186234   51711 pod_ready.go:86] duration metric: took 7.24885ms for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.280615   51711 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.286426   51711 pod_ready.go:94] pod "etcd-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.286446   51711 pod_ready.go:86] duration metric: took 5.805885ms for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.288885   51711 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.293769   51711 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.293787   51711 pod_ready.go:86] duration metric: took 4.884445ms for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.296432   51711 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.576349   51711 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.576388   51711 pod_ready.go:86] duration metric: took 279.933458ms for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.777084   51711 pod_ready.go:83] waiting for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.176016   51711 pod_ready.go:94] pod "kube-proxy-vhml9" is "Ready"
	I1219 03:37:30.176047   51711 pod_ready.go:86] duration metric: took 398.930848ms for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.377206   51711 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776837   51711 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:30.776861   51711 pod_ready.go:86] duration metric: took 399.600189ms for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776872   51711 pod_ready.go:40] duration metric: took 1.606601039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:30.827211   51711 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:37:30.828493   51711 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-382606" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	32e15240fa31d       6e38f40d628db       8 minutes ago       Running             storage-provisioner                    2                   fc9d52e71753c       storage-provisioner                                     kube-system
	2d520f6777674       d9cbc9f4053ca       8 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   9513325997612       kubernetes-dashboard-metrics-scraper-6b5c7dc479-5pgl4   kubernetes-dashboard
	566a8988155f0       a0607af4fcd8a       8 minutes ago       Running             kubernetes-dashboard-api               0                   c0abe52d6c6ca       kubernetes-dashboard-api-595fbdbc7-hzt4g                kubernetes-dashboard
	690cda17bf449       dd54374d0ab14       9 minutes ago       Running             kubernetes-dashboard-auth              0                   1bcee8cab018e       kubernetes-dashboard-auth-db88b997-gj8f6                kubernetes-dashboard
	d52a807299555       59f642f485d26       9 minutes ago       Running             kubernetes-dashboard-web               0                   fc3434f3f7ba7       kubernetes-dashboard-web-858bd7466-g2wg2                kubernetes-dashboard
	6f5166ce8bc44       3a975970da2f5       9 minutes ago       Running             proxy                                  0                   c8875f4a5d6d7       kubernetes-dashboard-kong-f487b85cd-qdxp2               kubernetes-dashboard
	8186b14c17a6b       3a975970da2f5       9 minutes ago       Exited              clear-stale-pid                        0                   c8875f4a5d6d7       kubernetes-dashboard-kong-f487b85cd-qdxp2               kubernetes-dashboard
	7fda8ec32cf13       ead0a4a53df89       9 minutes ago       Running             coredns                                1                   20a7fc075bd0c       coredns-5dd5756b68-k7zvn                                kube-system
	275ea642aa833       56cc512116c8f       9 minutes ago       Running             busybox                                1                   6646a9ecc362c       busybox                                                 default
	faa5402ab5f1d       6e38f40d628db       9 minutes ago       Exited              storage-provisioner                    1                   fc9d52e71753c       storage-provisioner                                     kube-system
	6afdb246cf175       ea1030da44aa1       9 minutes ago       Running             kube-proxy                             1                   2849645f72689       kube-proxy-r6bwr                                        kube-system
	a4dbb1c53b812       73deb9a3f7025       9 minutes ago       Running             etcd                                   1                   112d85f8fde58       etcd-old-k8s-version-638861                             kube-system
	a4a00d791c075       bb5e0dde9054c       9 minutes ago       Running             kube-apiserver                         1                   3f560778cf693       kube-apiserver-old-k8s-version-638861                   kube-system
	1507e32b71c8a       f6f496300a2ae       9 minutes ago       Running             kube-scheduler                         1                   94bdeee2e5e7d       kube-scheduler-old-k8s-version-638861                   kube-system
	975da5b753f18       4be79c38a4bab       9 minutes ago       Running             kube-controller-manager                1                   8fab925eeb22c       kube-controller-manager-old-k8s-version-638861          kube-system
	24824a169681e       56cc512116c8f       11 minutes ago      Exited              busybox                                0                   859af02e82bf5       busybox                                                 default
	d47d05341c2f9       ead0a4a53df89       12 minutes ago      Exited              coredns                                0                   6b2d1447a7785       coredns-5dd5756b68-k7zvn                                kube-system
	e5763ced197c9       ea1030da44aa1       12 minutes ago      Exited              kube-proxy                             0                   7e2fca6297d8b       kube-proxy-r6bwr                                        kube-system
	3cb460b28aa41       f6f496300a2ae       12 minutes ago      Exited              kube-scheduler                         0                   8cf7f9dd3da91       kube-scheduler-old-k8s-version-638861                   kube-system
	599c003858b08       73deb9a3f7025       12 minutes ago      Exited              etcd                                   0                   26c400e7c9080       etcd-old-k8s-version-638861                             kube-system
	14158cc611fbd       bb5e0dde9054c       12 minutes ago      Exited              kube-apiserver                         0                   e967b7f4adf13       kube-apiserver-old-k8s-version-638861                   kube-system
	ac0b2043b72f6       4be79c38a4bab       12 minutes ago      Exited              kube-controller-manager                0                   ac1cf0689249e       kube-controller-manager-old-k8s-version-638861          kube-system
	
	
	==> containerd <==
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.605277120Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd5225a08-4f82-42ca-b33b-94119eea214d/566a8988155f0c73fe28a7611d3c1a996b9a9a85b5153a4dd94a4c634f8c4136/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.606209235Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod110b4cbc-e16d-4cd9-aaf8-7a4854204c6a/7fda8ec32cf13372ee428ed3e5ef4d415391b5e7629bd309e4076f358fb5b547/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.607204775Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod232e90c8-378e-4a62-8ccb-850d56e8acce/d52a807299555a94ad54a215d26ab0bffd13574ee93cb182b433b954ec256f20/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.607981493Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podf14b3d00-4033-4d89-af48-44a049c36335/32e15240fa31df7bf6ce24ce3792bc601ae9273a3725283056e797deaf01c1f2/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.608761799Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod138dcbda4f97b7a2e9859168d1696321/a4a00d791c0757b381bb071135629d62efcd7b058175bb441c82dabdcc84b8ff/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.609522739Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod4abe0b08-38b3-47be-8b35-e371e18dd0f4/6f5166ce8bc44cac494ec06c2ee37b5e3aacb57bfdcef32deb6d4c2965410180/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.610403819Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod717b544a-6cb1-48ca-a26e-1bc94bcb2c3f/2d520f677767433bfa20bc5b35e0550c36fbc692b2b50339245aa19b39b6d1f6/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.611619784Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod392ecf98f6f3f2d486999d713279a0a8/975da5b753f1848982b3baa9f93ea7de92ce4b2a5abcbf196c20705bad18c267/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.612666855Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pode9ed91c694fe54412cab040f01555e9a/1507e32b71c8aba99c55cc5db0063bd12b16ed088ef31e17345b85fa936f3675/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.613684256Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podac9701989312c5fe54cdbb595c769cfa/a4dbb1c53b8122726334f86af00d8ac478bd098e855d230304cdac0be53a0e23/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.614350033Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod18c93c96-a5a7-4399-91e6-4a8e4ece1364/6afdb246cf175b86939d5a92ab5440220f2b51c26ea9c52137a9a3bdf281eb3b/hugetlb.2MB.events\""
	Dec 19 03:45:01 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:01.615237403Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod7c5cb69b-fc76-4a6b-ac31-d7eb130fce30/690cda17bf449652a4a2ff13e235583f0f51760f993845afe19091eb7bcfcc3b/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.635819006Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod110b4cbc-e16d-4cd9-aaf8-7a4854204c6a/7fda8ec32cf13372ee428ed3e5ef4d415391b5e7629bd309e4076f358fb5b547/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.636843310Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod232e90c8-378e-4a62-8ccb-850d56e8acce/d52a807299555a94ad54a215d26ab0bffd13574ee93cb182b433b954ec256f20/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.637831951Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podf14b3d00-4033-4d89-af48-44a049c36335/32e15240fa31df7bf6ce24ce3792bc601ae9273a3725283056e797deaf01c1f2/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.638792641Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod138dcbda4f97b7a2e9859168d1696321/a4a00d791c0757b381bb071135629d62efcd7b058175bb441c82dabdcc84b8ff/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.640409415Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod4abe0b08-38b3-47be-8b35-e371e18dd0f4/6f5166ce8bc44cac494ec06c2ee37b5e3aacb57bfdcef32deb6d4c2965410180/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.641446705Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod717b544a-6cb1-48ca-a26e-1bc94bcb2c3f/2d520f677767433bfa20bc5b35e0550c36fbc692b2b50339245aa19b39b6d1f6/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.642835965Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod392ecf98f6f3f2d486999d713279a0a8/975da5b753f1848982b3baa9f93ea7de92ce4b2a5abcbf196c20705bad18c267/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.643838542Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pode9ed91c694fe54412cab040f01555e9a/1507e32b71c8aba99c55cc5db0063bd12b16ed088ef31e17345b85fa936f3675/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.644818296Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podac9701989312c5fe54cdbb595c769cfa/a4dbb1c53b8122726334f86af00d8ac478bd098e855d230304cdac0be53a0e23/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.645741112Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod18c93c96-a5a7-4399-91e6-4a8e4ece1364/6afdb246cf175b86939d5a92ab5440220f2b51c26ea9c52137a9a3bdf281eb3b/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.646807234Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod7c5cb69b-fc76-4a6b-ac31-d7eb130fce30/690cda17bf449652a4a2ff13e235583f0f51760f993845afe19091eb7bcfcc3b/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.648374469Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podbe434df5-3dc3-46ee-afab-a94b9048072e/275ea642aa8335ee67952db483469728a7b4618659737f819176fb2d425ae4e6/hugetlb.2MB.events\""
	Dec 19 03:45:11 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:45:11.649650656Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd5225a08-4f82-42ca-b33b-94119eea214d/566a8988155f0c73fe28a7611d3c1a996b9a9a85b5153a4dd94a4c634f8c4136/hugetlb.2MB.events\""
	
	
	==> coredns [7fda8ec32cf13372ee428ed3e5ef4d415391b5e7629bd309e4076f358fb5b547] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41531 - 54427 "HINFO IN 4055901202491664803.6192370230818033698. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019008068s
	
	
	==> coredns [d47d05341c2f9312312755e83708494ed9b6626dc49261ca6470871aad909790] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:54015 - 10535 "HINFO IN 4001215073591724234.6939015659602496373. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014938124s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-638861
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-638861
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-638861
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_32_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-638861
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:45:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:41:51 +0000   Fri, 19 Dec 2025 03:32:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:41:51 +0000   Fri, 19 Dec 2025 03:32:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:41:51 +0000   Fri, 19 Dec 2025 03:32:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:41:51 +0000   Fri, 19 Dec 2025 03:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.183
	  Hostname:    old-k8s-version-638861
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ec830c5d817463d857d71a1ab5fac56
	  System UUID:                4ec830c5-d817-463d-857d-71a1ab5fac56
	  Boot ID:                    21dff12a-5acb-466b-b20a-28df67d9021e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-k7zvn                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-old-k8s-version-638861                              100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-old-k8s-version-638861                    250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-old-k8s-version-638861           200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-r6bwr                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-old-k8s-version-638861                    100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-57f55c9bc5-n4sjv                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        kubernetes-dashboard-api-595fbdbc7-hzt4g                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m21s
	  kubernetes-dashboard        kubernetes-dashboard-auth-db88b997-gj8f6                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m21s
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-qdxp2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m21s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-5pgl4    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m21s
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-g2wg2                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m21s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 12m                    kube-proxy       
	  Normal  Starting                 9m33s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  12m                    kubelet          Node old-k8s-version-638861 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                    kubelet          Node old-k8s-version-638861 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m                    kubelet          Node old-k8s-version-638861 status is now: NodeHasSufficientPID
	  Normal  NodeReady                12m                    kubelet          Node old-k8s-version-638861 status is now: NodeReady
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                    node-controller  Node old-k8s-version-638861 event: Registered Node old-k8s-version-638861 in Controller
	  Normal  Starting                 9m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m39s (x8 over 9m39s)  kubelet          Node old-k8s-version-638861 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m39s (x8 over 9m39s)  kubelet          Node old-k8s-version-638861 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m39s (x7 over 9m39s)  kubelet          Node old-k8s-version-638861 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m21s                  node-controller  Node old-k8s-version-638861 event: Registered Node old-k8s-version-638861 in Controller
	
	
	==> dmesg <==
	[Dec19 03:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002532] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.851035] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103924] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.693625] kauditd_printk_skb: 208 callbacks suppressed
	[  +4.544134] kauditd_printk_skb: 272 callbacks suppressed
	[  +0.133853] kauditd_printk_skb: 41 callbacks suppressed
	[Dec19 03:36] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.199700] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.772464] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.344696] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.678146] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [599c003858b08bf2788501dd83ece0914816ff86f6cb26fe31b48a4eef02f9c7] <==
	{"level":"info","ts":"2025-12-19T03:33:01.778677Z","caller":"traceutil/trace.go:171","msg":"trace[418481790] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-r6bwr; range_end:; response_count:1; response_revision:319; }","duration":"216.146441ms","start":"2025-12-19T03:33:01.562524Z","end":"2025-12-19T03:33:01.778671Z","steps":["trace[418481790] 'agreement among raft nodes before linearized reading'  (duration: 216.086596ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:01.779772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.760624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-12-19T03:33:01.779812Z","caller":"traceutil/trace.go:171","msg":"trace[1476294962] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:321; }","duration":"186.806663ms","start":"2025-12-19T03:33:01.592996Z","end":"2025-12-19T03:33:01.779802Z","steps":["trace[1476294962] 'agreement among raft nodes before linearized reading'  (duration: 186.708471ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:01.780068Z","caller":"traceutil/trace.go:171","msg":"trace[990989111] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"212.942526ms","start":"2025-12-19T03:33:01.567115Z","end":"2025-12-19T03:33:01.780058Z","steps":["trace[990989111] 'process raft request'  (duration: 212.472985ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:01.780222Z","caller":"traceutil/trace.go:171","msg":"trace[106583097] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"211.202561ms","start":"2025-12-19T03:33:01.569005Z","end":"2025-12-19T03:33:01.780207Z","steps":["trace[106583097] 'process raft request'  (duration: 210.663108ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:01.780499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.568322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-12-19T03:33:01.780526Z","caller":"traceutil/trace.go:171","msg":"trace[1705747924] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:321; }","duration":"137.600838ms","start":"2025-12-19T03:33:01.642919Z","end":"2025-12-19T03:33:01.780519Z","steps":["trace[1705747924] 'agreement among raft nodes before linearized reading'  (duration: 137.550156ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:01.780629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.138053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-12-19T03:33:01.780648Z","caller":"traceutil/trace.go:171","msg":"trace[222383032] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:321; }","duration":"166.165944ms","start":"2025-12-19T03:33:01.614476Z","end":"2025-12-19T03:33:01.780642Z","steps":["trace[222383032] 'agreement among raft nodes before linearized reading'  (duration: 166.11971ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:11.189241Z","caller":"traceutil/trace.go:171","msg":"trace[1604052810] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"171.589199ms","start":"2025-12-19T03:33:11.01763Z","end":"2025-12-19T03:33:11.189219Z","steps":["trace[1604052810] 'process raft request'  (duration: 171.468684ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:11.526126Z","caller":"traceutil/trace.go:171","msg":"trace[1366067802] linearizableReadLoop","detail":"{readStateIndex:411; appliedIndex:410; }","duration":"137.90963ms","start":"2025-12-19T03:33:11.388195Z","end":"2025-12-19T03:33:11.526104Z","steps":["trace[1366067802] 'read index received'  (duration: 119.926789ms)","trace[1366067802] 'applied index is now lower than readState.Index'  (duration: 17.981718ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:11.526269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.076643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:33:11.526341Z","caller":"traceutil/trace.go:171","msg":"trace[1645059783] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:396; }","duration":"138.165896ms","start":"2025-12-19T03:33:11.388164Z","end":"2025-12-19T03:33:11.52633Z","steps":["trace[1645059783] 'agreement among raft nodes before linearized reading'  (duration: 138.045932ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:15.527883Z","caller":"traceutil/trace.go:171","msg":"trace[109695888] linearizableReadLoop","detail":"{readStateIndex:421; appliedIndex:420; }","duration":"140.347346ms","start":"2025-12-19T03:33:15.387457Z","end":"2025-12-19T03:33:15.527804Z","steps":["trace[109695888] 'read index received'  (duration: 140.13905ms)","trace[109695888] 'applied index is now lower than readState.Index'  (duration: 206.986µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:15.528019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.567698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:33:15.528078Z","caller":"traceutil/trace.go:171","msg":"trace[1901639264] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:405; }","duration":"140.641232ms","start":"2025-12-19T03:33:15.387425Z","end":"2025-12-19T03:33:15.528067Z","steps":["trace[1901639264] 'agreement among raft nodes before linearized reading'  (duration: 140.544548ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:15.52835Z","caller":"traceutil/trace.go:171","msg":"trace[809415947] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"316.603947ms","start":"2025-12-19T03:33:15.21173Z","end":"2025-12-19T03:33:15.528334Z","steps":["trace[809415947] 'process raft request'  (duration: 315.918602ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:15.53019Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:33:15.211718Z","time spent":"316.719886ms","remote":"127.0.0.1:54632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1107,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:399 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1034 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-19T03:33:15.96468Z","caller":"traceutil/trace.go:171","msg":"trace[1003698116] linearizableReadLoop","detail":"{readStateIndex:422; appliedIndex:421; }","duration":"262.22972ms","start":"2025-12-19T03:33:15.702429Z","end":"2025-12-19T03:33:15.964658Z","steps":["trace[1003698116] 'read index received'  (duration: 231.879953ms)","trace[1003698116] 'applied index is now lower than readState.Index'  (duration: 30.3488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:15.964906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.480731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-k7zvn\" ","response":"range_response_count:1 size:4751"}
	{"level":"info","ts":"2025-12-19T03:33:15.964937Z","caller":"traceutil/trace.go:171","msg":"trace[1392165153] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-k7zvn; range_end:; response_count:1; response_revision:406; }","duration":"262.521705ms","start":"2025-12-19T03:33:15.702406Z","end":"2025-12-19T03:33:15.964928Z","steps":["trace[1392165153] 'agreement among raft nodes before linearized reading'  (duration: 262.360771ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:15.965114Z","caller":"traceutil/trace.go:171","msg":"trace[261807475] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"280.698225ms","start":"2025-12-19T03:33:15.684408Z","end":"2025-12-19T03:33:15.965106Z","steps":["trace[261807475] 'process raft request'  (duration: 249.859586ms)","trace[261807475] 'compare'  (duration: 30.272809ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:33:53.984686Z","caller":"traceutil/trace.go:171","msg":"trace[1754295826] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"245.190395ms","start":"2025-12-19T03:33:53.739476Z","end":"2025-12-19T03:33:53.984666Z","steps":["trace[1754295826] 'process raft request'  (duration: 244.997458ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:57.749258Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.067607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5183"}
	{"level":"info","ts":"2025-12-19T03:33:57.749706Z","caller":"traceutil/trace.go:171","msg":"trace[401862548] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:486; }","duration":"170.574305ms","start":"2025-12-19T03:33:57.579117Z","end":"2025-12-19T03:33:57.749691Z","steps":["trace[401862548] 'range keys from in-memory index tree'  (duration: 169.881247ms)"],"step_count":1}
	
	
	==> etcd [a4dbb1c53b8122726334f86af00d8ac478bd098e855d230304cdac0be53a0e23] <==
	{"level":"info","ts":"2025-12-19T03:35:42.595955Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 received MsgPreVoteResp from 378cdee1d1b27193 at term 2"}
	{"level":"info","ts":"2025-12-19T03:35:42.595987Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became candidate at term 3"}
	{"level":"info","ts":"2025-12-19T03:35:42.595998Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 received MsgVoteResp from 378cdee1d1b27193 at term 3"}
	{"level":"info","ts":"2025-12-19T03:35:42.596032Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"378cdee1d1b27193 became leader at term 3"}
	{"level":"info","ts":"2025-12-19T03:35:42.596062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 378cdee1d1b27193 elected leader 378cdee1d1b27193 at term 3"}
	{"level":"info","ts":"2025-12-19T03:35:42.598301Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"378cdee1d1b27193","local-member-attributes":"{Name:old-k8s-version-638861 ClientURLs:[https://192.168.61.183:2379]}","request-path":"/0/members/378cdee1d1b27193/attributes","cluster-id":"438aa8919cf6d084","publish-timeout":"7s"}
	{"level":"info","ts":"2025-12-19T03:35:42.598543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:35:42.599004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:35:42.601137Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.183:2379"}
	{"level":"info","ts":"2025-12-19T03:35:42.601973Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:35:42.601997Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:35:42.602104Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:36:05.107336Z","caller":"traceutil/trace.go:171","msg":"trace[558099464] transaction","detail":"{read_only:false; response_revision:754; number_of_response:1; }","duration":"144.532222ms","start":"2025-12-19T03:36:04.962722Z","end":"2025-12-19T03:36:05.107254Z","steps":["trace[558099464] 'process raft request'  (duration: 144.395307ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:15.331617Z","caller":"traceutil/trace.go:171","msg":"trace[1694338832] transaction","detail":"{read_only:false; response_revision:775; number_of_response:1; }","duration":"147.204329ms","start":"2025-12-19T03:36:15.184384Z","end":"2025-12-19T03:36:15.331588Z","steps":["trace[1694338832] 'process raft request'  (duration: 147.074948ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:15.39219Z","caller":"traceutil/trace.go:171","msg":"trace[856957208] linearizableReadLoop","detail":"{readStateIndex:826; appliedIndex:824; }","duration":"136.528638ms","start":"2025-12-19T03:36:15.255641Z","end":"2025-12-19T03:36:15.392169Z","steps":["trace[856957208] 'read index received'  (duration: 75.729675ms)","trace[856957208] 'applied index is now lower than readState.Index'  (duration: 60.798284ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:36:15.393227Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.012099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.183\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-12-19T03:36:15.393324Z","caller":"traceutil/trace.go:171","msg":"trace[47536450] range","detail":"{range_begin:/registry/masterleases/192.168.61.183; range_end:; response_count:1; response_revision:776; }","duration":"112.476946ms","start":"2025-12-19T03:36:15.280828Z","end":"2025-12-19T03:36:15.393305Z","steps":["trace[47536450] 'agreement among raft nodes before linearized reading'  (duration: 111.92194ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:15.39352Z","caller":"traceutil/trace.go:171","msg":"trace[1671633345] transaction","detail":"{read_only:false; response_revision:776; number_of_response:1; }","duration":"154.411351ms","start":"2025-12-19T03:36:15.239099Z","end":"2025-12-19T03:36:15.393511Z","steps":["trace[1671633345] 'process raft request'  (duration: 152.987524ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:15.393649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.030783ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:5 size:31949"}
	{"level":"info","ts":"2025-12-19T03:36:15.393667Z","caller":"traceutil/trace.go:171","msg":"trace[1123717966] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:5; response_revision:776; }","duration":"138.054614ms","start":"2025-12-19T03:36:15.255605Z","end":"2025-12-19T03:36:15.39366Z","steps":["trace[1123717966] 'agreement among raft nodes before linearized reading'  (duration: 137.970422ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:36.175138Z","caller":"traceutil/trace.go:171","msg":"trace[1482774329] transaction","detail":"{read_only:false; response_revision:821; number_of_response:1; }","duration":"139.367478ms","start":"2025-12-19T03:36:36.035703Z","end":"2025-12-19T03:36:36.175071Z","steps":["trace[1482774329] 'process raft request'  (duration: 139.113844ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:36.500424Z","caller":"traceutil/trace.go:171","msg":"trace[261135074] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"129.321931ms","start":"2025-12-19T03:36:36.371081Z","end":"2025-12-19T03:36:36.500403Z","steps":["trace[261135074] 'process raft request'  (duration: 127.882661ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:45.967671Z","caller":"traceutil/trace.go:171","msg":"trace[1486780956] transaction","detail":"{read_only:false; response_revision:827; number_of_response:1; }","duration":"103.984761ms","start":"2025-12-19T03:36:45.863666Z","end":"2025-12-19T03:36:45.967651Z","steps":["trace[1486780956] 'process raft request'  (duration: 103.828387ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:46.113011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.186658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:36:46.113134Z","caller":"traceutil/trace.go:171","msg":"trace[1300966544] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:827; }","duration":"122.329587ms","start":"2025-12-19T03:36:45.990771Z","end":"2025-12-19T03:36:46.113101Z","steps":["trace[1300966544] 'count revisions from in-memory index tree'  (duration: 121.884937ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:45:18 up 9 min,  0 users,  load average: 0.15, 0.22, 0.17
	Linux old-k8s-version-638861 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [14158cc611fbd1a4ea8dd6a4977864f3368ec8909e3f0f4fee3b20942931d770] <==
	E1219 03:33:57.287969       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:33:57.289508       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1219 03:33:57.289543       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:33:57.297314       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:57.297374       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1219 03:33:57.297411       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:33:57.297438       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1219 03:33:57.297445       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 03:33:57.473248       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.102.77.221"}
	W1219 03:33:57.496616       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:57.498960       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1219 03:33:57.509080       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W1219 03:33:57.515235       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:57.517602       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1219 03:33:58.288458       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:58.288545       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:33:58.288556       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:33:58.288727       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:58.288744       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:33:58.289713       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a4a00d791c0757b381bb071135629d62efcd7b058175bb441c82dabdcc84b8ff] <==
	E1219 03:40:45.179285       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:40:45.180539       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:41:44.045063       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.77.221:443: connect: connection refused
	I1219 03:41:44.045113       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:41:45.179979       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:41:45.180027       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:41:45.180039       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:41:45.181302       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:41:45.181389       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:41:45.181398       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:42:44.045589       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.77.221:443: connect: connection refused
	I1219 03:42:44.045684       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 03:43:44.046206       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.77.221:443: connect: connection refused
	I1219 03:43:44.046276       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:43:45.181164       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:43:45.181237       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:43:45.181245       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:43:45.182383       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:43:45.182521       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:43:45.182545       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:44:44.045990       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.77.221:443: connect: connection refused
	I1219 03:44:44.046054       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [975da5b753f1848982b3baa9f93ea7de92ce4b2a5abcbf196c20705bad18c267] <==
	I1219 03:39:37.370461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="182.129µs"
	E1219 03:39:57.411683       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:39:57.705473       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:40:27.422520       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:40:27.716167       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:40:57.429552       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:40:57.729305       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:41:27.438273       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:41:27.740964       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:41:57.446018       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:41:57.751975       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:42:03.369824       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="485.049µs"
	I1219 03:42:18.369764       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="609.977µs"
	E1219 03:42:27.455613       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:42:27.762666       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:42:57.467158       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:42:57.774261       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:43:27.479977       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:43:27.783298       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:43:57.487201       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:43:57.793489       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:44:27.494531       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:44:27.806590       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:44:57.501587       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:44:57.818823       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [ac0b2043b72f60c8bde3367de31e4b84f564861a647a19c0a8547ccdd0e4a432] <==
	I1219 03:33:01.973742       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-k7zvn"
	I1219 03:33:02.070075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.257851833s"
	I1219 03:33:02.231281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="161.147283ms"
	I1219 03:33:02.231396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.297µs"
	I1219 03:33:02.278520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.929µs"
	I1219 03:33:02.319035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="233.811µs"
	I1219 03:33:03.849420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.999µs"
	I1219 03:33:03.955463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="142.279µs"
	I1219 03:33:04.221462       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1219 03:33:04.258572       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hslxw"
	I1219 03:33:04.282166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.958564ms"
	I1219 03:33:04.296476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.576394ms"
	I1219 03:33:04.298270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="208.722µs"
	I1219 03:33:13.751329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.194µs"
	I1219 03:33:13.883973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="259.926µs"
	I1219 03:33:13.905278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="197.751µs"
	I1219 03:33:13.912121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.311µs"
	I1219 03:33:42.387536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.298125ms"
	I1219 03:33:42.388156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="552.951µs"
	I1219 03:33:57.316954       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I1219 03:33:57.346643       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-n4sjv"
	I1219 03:33:57.364040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="48.069258ms"
	I1219 03:33:57.395079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="30.97649ms"
	I1219 03:33:57.396892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="83.618µs"
	I1219 03:33:57.401790       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="77.188µs"
	
	
	==> kube-proxy [6afdb246cf175b86939d5a92ab5440220f2b51c26ea9c52137a9a3bdf281eb3b] <==
	I1219 03:35:45.309114       1 server_others.go:69] "Using iptables proxy"
	I1219 03:35:45.329188       1 node.go:141] Successfully retrieved node IP: 192.168.61.183
	I1219 03:35:45.389287       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1219 03:35:45.389431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:35:45.392841       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:35:45.393529       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:35:45.394846       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:35:45.395324       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:35:45.398839       1 config.go:188] "Starting service config controller"
	I1219 03:35:45.400252       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:35:45.399156       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:35:45.400554       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:35:45.403351       1 config.go:315] "Starting node config controller"
	I1219 03:35:45.404475       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:35:45.501205       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:35:45.501276       1 shared_informer.go:318] Caches are synced for service config
	I1219 03:35:45.505070       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [e5763ced197c990a55ef49ee57c3d0117c14f750bfcbb78eeadf20d2e1ce8b21] <==
	I1219 03:33:03.435881       1 server_others.go:69] "Using iptables proxy"
	I1219 03:33:03.449363       1 node.go:141] Successfully retrieved node IP: 192.168.61.183
	I1219 03:33:03.539580       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1219 03:33:03.539621       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:33:03.542379       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:33:03.542440       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:33:03.542794       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:33:03.543208       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:33:03.544347       1 config.go:188] "Starting service config controller"
	I1219 03:33:03.544394       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:33:03.544518       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:33:03.544528       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:33:03.547122       1 config.go:315] "Starting node config controller"
	I1219 03:33:03.547154       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:33:03.645077       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:33:03.645136       1 shared_informer.go:318] Caches are synced for service config
	I1219 03:33:03.647364       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [1507e32b71c8aba99c55cc5db0063bd12b16ed088ef31e17345b85fa936f3675] <==
	I1219 03:35:41.534723       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:35:44.111617       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:35:44.111664       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:35:44.111675       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:35:44.111704       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:35:44.181530       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:35:44.181573       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:35:44.186347       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:35:44.187333       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:35:44.188371       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:35:44.189958       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:35:44.290438       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [3cb460b28aa411920e98df6adcd7b37a2bc80e2092bf8f1a14621f8c687e104c] <==
	W1219 03:32:44.502102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1219 03:32:44.502604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1219 03:32:44.502162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1219 03:32:44.502652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1219 03:32:44.502212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:44.502691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:44.502281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1219 03:32:44.503051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1219 03:32:44.498124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:44.503064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:45.317590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1219 03:32:45.317641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1219 03:32:45.330568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:45.330617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:45.433621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:45.433784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:45.519107       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1219 03:32:45.519151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1219 03:32:45.572045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:45.572183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:45.586405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1219 03:32:45.586685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1219 03:32:45.642160       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1219 03:32:45.642188       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1219 03:32:48.885958       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:42:39 old-k8s-version-638861 kubelet[1087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 03:42:39 old-k8s-version-638861 kubelet[1087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 03:42:46 old-k8s-version-638861 kubelet[1087]: E1219 03:42:46.352964    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:42:59 old-k8s-version-638861 kubelet[1087]: E1219 03:42:59.353465    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:43:11 old-k8s-version-638861 kubelet[1087]: E1219 03:43:11.353142    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:43:23 old-k8s-version-638861 kubelet[1087]: E1219 03:43:23.351443    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:43:35 old-k8s-version-638861 kubelet[1087]: E1219 03:43:35.352137    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:43:39 old-k8s-version-638861 kubelet[1087]: E1219 03:43:39.385856    1087 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 19 03:43:39 old-k8s-version-638861 kubelet[1087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 03:43:39 old-k8s-version-638861 kubelet[1087]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 03:43:39 old-k8s-version-638861 kubelet[1087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 03:43:39 old-k8s-version-638861 kubelet[1087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 03:43:50 old-k8s-version-638861 kubelet[1087]: E1219 03:43:50.352471    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:44:01 old-k8s-version-638861 kubelet[1087]: E1219 03:44:01.353086    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:44:14 old-k8s-version-638861 kubelet[1087]: E1219 03:44:14.352499    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:44:27 old-k8s-version-638861 kubelet[1087]: E1219 03:44:27.353471    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:44:39 old-k8s-version-638861 kubelet[1087]: E1219 03:44:39.386516    1087 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 19 03:44:39 old-k8s-version-638861 kubelet[1087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 03:44:39 old-k8s-version-638861 kubelet[1087]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 03:44:39 old-k8s-version-638861 kubelet[1087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 03:44:39 old-k8s-version-638861 kubelet[1087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 03:44:40 old-k8s-version-638861 kubelet[1087]: E1219 03:44:40.352804    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:44:52 old-k8s-version-638861 kubelet[1087]: E1219 03:44:52.352052    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:45:04 old-k8s-version-638861 kubelet[1087]: E1219 03:45:04.353003    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:45:16 old-k8s-version-638861 kubelet[1087]: E1219 03:45:16.352078    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
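The repeated ImagePullBackOff is expected in this profile: the metrics-server addon was enabled with --registries=MetricsServer=fake.domain (see the Audit table further down), so the image can never be pulled. A minimal sketch, assuming a reachable cluster and not part of the test harness, that surfaces the same condition via client-go:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// List kube-system pods and report containers stuck in ImagePullBackOff,
	// the state the kubelet log above keeps retrying.
	pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		for _, st := range p.Status.ContainerStatuses {
			if st.State.Waiting != nil && st.State.Waiting.Reason == "ImagePullBackOff" {
				fmt.Printf("%s/%s: %s\n", p.Name, st.Name, st.State.Waiting.Message)
			}
		}
	}
}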
	
	
	==> kubernetes-dashboard [2d520f677767433bfa20bc5b35e0550c36fbc692b2b50339245aa19b39b6d1f6] <==
	10.244.0.1 - - [19/Dec/2025:03:42:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:42:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:42:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:42:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:42:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:43:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:43:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:43:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:43:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:43:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:43:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:43:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:43:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:44:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:44:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:44:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:44:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:44:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:44:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:44:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:44:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:45:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:45:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	E1219 03:43:25.641591       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:44:25.642106       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [566a8988155f0c73fe28a7611d3c1a996b9a9a85b5153a4dd94a4c634f8c4136] <==
	I1219 03:36:22.141036       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:36:22.141152       1 init.go:49] Using in-cluster config
	I1219 03:36:22.142025       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:36:22.142241       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:36:22.142421       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:36:22.142431       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:36:22.151573       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:36:22.151605       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:36:22.229421       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:36:22.230314       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:36:52.238932       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [690cda17bf449652a4a2ff13e235583f0f51760f993845afe19091eb7bcfcc3b] <==
	I1219 03:36:18.393815       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:36:18.393977       1 init.go:49] Using in-cluster config
	I1219 03:36:18.394200       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [d52a807299555a94ad54a215d26ab0bffd13574ee93cb182b433b954ec256f20] <==
	I1219 03:36:14.840301       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:36:14.840566       1 init.go:48] Using in-cluster config
	I1219 03:36:14.841157       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [32e15240fa31df7bf6ce24ce3792bc601ae9273a3725283056e797deaf01c1f2] <==
	I1219 03:36:29.719731       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:36:29.736835       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:36:29.737971       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:36:47.192064       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:36:47.193094       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-638861_e5fc271f-70ff-4288-add8-55a85b334ed9!
	I1219 03:36:47.196942       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b1d6a796-c385-48cc-9e8d-36b9927d5f1f", APIVersion:"v1", ResourceVersion:"829", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-638861_e5fc271f-70ff-4288-add8-55a85b334ed9 became leader
	I1219 03:36:47.293584       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-638861_e5fc271f-70ff-4288-add8-55a85b334ed9!
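The provisioner only starts its controller after winning the kube-system/k8s.io-minikube-hostpath leader election shown above (this binary uses an Endpoints-based lock, per the event line). A minimal sketch of the same pattern with client-go's leaderelection package and a Lease lock; an illustration under those assumptions, not the provisioner's source:

package main

import (
	"context"
	"log"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	id, _ := os.Hostname()

	// Lease named after the lock in the log above (hypothetical reuse of the name).
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "k8s.io-minikube-hostpath", Namespace: "kube-system"},
		Client:     cs.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: id},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; starting provisioner controller")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; shutting down")
			},
		},
	})
}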
	
	
	==> storage-provisioner [faa5402ab5f1dae317489202cbd7a47a83c8b119d88b43f090850212d610b624] <==
	I1219 03:35:45.081955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:36:15.118032       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-638861 -n old-k8s-version-638861
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-638861 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-n4sjv
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-638861 describe pod metrics-server-57f55c9bc5-n4sjv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-638861 describe pod metrics-server-57f55c9bc5-n4sjv: exit status 1 (64.859787ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-n4sjv" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-638861 describe pod metrics-server-57f55c9bc5-n4sjv: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (542.62s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.62s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-728806 -n no-preload-728806
start_stop_delete_test.go:272: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:45:25.059357578 +0000 UTC m=+4811.873986395
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
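The harness spends the 9m0s budget polling for a Running pod carrying the k8s-app=kubernetes-dashboard label and gives up with "context deadline exceeded". A rough sketch of such a wait loop (a generic client-go poll, not minikube's helper):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Poll every 5s, for up to 9 minutes, until a labelled pod reports Running.
	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 9*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
				LabelSelector: "k8s-app=kubernetes-dashboard",
			})
			if err != nil {
				return false, nil // treat API errors as transient and keep polling
			}
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					return true, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pod did not start in time:", err) // mirrors "context deadline exceeded"
		return
	}
	fmt.Println("dashboard pod is Running")
}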
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-728806 -n no-preload-728806
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-728806 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-728806 logs -n 25: (1.712457177s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                       ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-694633 sudo cat /etc/containerd/config.toml                                                                                                                                                                                             │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo containerd config dump                                                                                                                                                                                                      │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │                     │
	│ ssh     │ -p bridge-694633 sudo systemctl cat crio --no-pager                                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                     │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo crio config                                                                                                                                                                                                                 │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p bridge-694633                                                                                                                                                                                                                                  │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p disable-driver-mounts-477416                                                                                                                                                                                                                   │ disable-driver-mounts-477416 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-638861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ stop    │ -p old-k8s-version-638861 --alsologtostderr -v=3                                                                                                                                                                                                  │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-728806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p no-preload-728806 --alsologtostderr -v=3                                                                                                                                                                                                       │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-832734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p embed-certs-832734 --alsologtostderr -v=3                                                                                                                                                                                                      │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ stop    │ -p default-k8s-diff-port-382606 --alsologtostderr -v=3                                                                                                                                                                                            │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-638861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:36:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:36:29.621083   51711 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:36:29.621200   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621205   51711 out.go:374] Setting ErrFile to fd 2...
	I1219 03:36:29.621212   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621491   51711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:36:29.622131   51711 out.go:368] Setting JSON to false
	I1219 03:36:29.623408   51711 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4729,"bootTime":1766110661,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:36:29.623486   51711 start.go:143] virtualization: kvm guest
	I1219 03:36:29.625670   51711 out.go:179] * [default-k8s-diff-port-382606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:36:29.633365   51711 notify.go:221] Checking for updates...
	I1219 03:36:29.633417   51711 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:36:29.635075   51711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:36:29.636942   51711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:29.638374   51711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:36:29.639842   51711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:36:29.641026   51711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:36:29.642747   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:29.643478   51711 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:36:29.700163   51711 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:36:29.701162   51711 start.go:309] selected driver: kvm2
	I1219 03:36:29.701180   51711 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.701323   51711 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:36:29.702837   51711 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:29.702885   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:29.702957   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:29.703020   51711 start.go:353] cluster config:
	{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.703150   51711 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:36:29.704494   51711 out.go:179] * Starting "default-k8s-diff-port-382606" primary control-plane node in "default-k8s-diff-port-382606" cluster
	I1219 03:36:29.705691   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:29.705751   51711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:36:29.705771   51711 cache.go:65] Caching tarball of preloaded images
	I1219 03:36:29.705892   51711 preload.go:238] Found /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:36:29.705927   51711 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:36:29.706078   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:29.706318   51711 start.go:360] acquireMachinesLock for default-k8s-diff-port-382606: {Name:mkbf0ff4f4743f75373609a52c13bcf346114394 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:36:29.706374   51711 start.go:364] duration metric: took 32.309µs to acquireMachinesLock for "default-k8s-diff-port-382606"
	I1219 03:36:29.706388   51711 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:36:29.706395   51711 fix.go:54] fixHost starting: 
	I1219 03:36:29.708913   51711 fix.go:112] recreateIfNeeded on default-k8s-diff-port-382606: state=Stopped err=<nil>
	W1219 03:36:29.708943   51711 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:36:27.974088   51386 addons.go:239] Setting addon default-storageclass=true in "embed-certs-832734"
	W1219 03:36:27.974109   51386 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:36:27.974136   51386 host.go:66] Checking if "embed-certs-832734" exists ...
	I1219 03:36:27.974565   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:36:27.974582   51386 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:36:27.974599   51386 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:27.974608   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:36:27.976663   51386 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:27.976691   51386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:36:27.976771   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.977846   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.977880   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.978136   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.979376   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979747   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979820   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.979860   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980122   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.980448   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.980482   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980686   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.981056   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981521   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.981545   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981792   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:28.331935   51386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:28.393904   51386 node_ready.go:35] waiting up to 6m0s for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398272   51386 node_ready.go:49] node "embed-certs-832734" is "Ready"
	I1219 03:36:28.398297   51386 node_ready.go:38] duration metric: took 4.336343ms for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398310   51386 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:28.398457   51386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:28.475709   51386 api_server.go:72] duration metric: took 507.310055ms to wait for apiserver process to appear ...
	I1219 03:36:28.475751   51386 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:28.475776   51386 api_server.go:253] Checking apiserver healthz at https://192.168.83.196:8443/healthz ...
	I1219 03:36:28.483874   51386 api_server.go:279] https://192.168.83.196:8443/healthz returned 200:
	ok
	I1219 03:36:28.485710   51386 api_server.go:141] control plane version: v1.34.3
	I1219 03:36:28.485738   51386 api_server.go:131] duration metric: took 9.978141ms to wait for apiserver health ...
	I1219 03:36:28.485751   51386 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:36:28.493956   51386 system_pods.go:59] 8 kube-system pods found
	I1219 03:36:28.493996   51386 system_pods.go:61] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.494024   51386 system_pods.go:61] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.494037   51386 system_pods.go:61] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.494044   51386 system_pods.go:61] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.494052   51386 system_pods.go:61] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.494058   51386 system_pods.go:61] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.494064   51386 system_pods.go:61] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.494074   51386 system_pods.go:61] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.494080   51386 system_pods.go:74] duration metric: took 8.32329ms to wait for pod list to return data ...
	I1219 03:36:28.494090   51386 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:36:28.500269   51386 default_sa.go:45] found service account: "default"
	I1219 03:36:28.500298   51386 default_sa.go:55] duration metric: took 6.200379ms for default service account to be created ...
	I1219 03:36:28.500309   51386 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:36:28.601843   51386 system_pods.go:86] 8 kube-system pods found
	I1219 03:36:28.601871   51386 system_pods.go:89] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.601880   51386 system_pods.go:89] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.601887   51386 system_pods.go:89] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.601892   51386 system_pods.go:89] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.601896   51386 system_pods.go:89] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.601902   51386 system_pods.go:89] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.601921   51386 system_pods.go:89] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.601930   51386 system_pods.go:89] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.601938   51386 system_pods.go:126] duration metric: took 101.621956ms to wait for k8s-apps to be running ...
	I1219 03:36:28.601947   51386 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:36:28.602031   51386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:36:28.618616   51386 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:36:28.685146   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:36:28.685175   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:36:28.694410   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:28.696954   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:28.726390   51386 system_svc.go:56] duration metric: took 124.434217ms WaitForService to wait for kubelet
	I1219 03:36:28.726426   51386 kubeadm.go:587] duration metric: took 758.032732ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:28.726450   51386 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:36:28.726520   51386 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:36:28.739364   51386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:36:28.739393   51386 node_conditions.go:123] node cpu capacity is 2
	I1219 03:36:28.739407   51386 node_conditions.go:105] duration metric: took 12.951551ms to run NodePressure ...
	I1219 03:36:28.739421   51386 start.go:242] waiting for startup goroutines ...
	I1219 03:36:28.774949   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:36:28.774981   51386 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:36:28.896758   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:28.896785   51386 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:36:29.110522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:31.016418   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.319423876s)
	I1219 03:36:31.016497   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.322025841s)
	I1219 03:36:31.016534   51386 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.28998192s)
	I1219 03:36:31.016597   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.906047637s)
	I1219 03:36:31.016610   51386 addons.go:500] Verifying addon metrics-server=true in "embed-certs-832734"
	I1219 03:36:31.016613   51386 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:36:29.711054   51711 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-382606" ...
	I1219 03:36:29.711101   51711 main.go:144] libmachine: starting domain...
	I1219 03:36:29.711116   51711 main.go:144] libmachine: ensuring networks are active...
	I1219 03:36:29.712088   51711 main.go:144] libmachine: Ensuring network default is active
	I1219 03:36:29.712549   51711 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-382606 is active
	I1219 03:36:29.713312   51711 main.go:144] libmachine: getting domain XML...
	I1219 03:36:29.714943   51711 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-382606</name>
	  <uuid>342506c1-9e12-4922-9438-23d9d57eea28</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/default-k8s-diff-port-382606.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:fb:a4:4e'/>
	      <source network='mk-default-k8s-diff-port-382606'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:57:4f:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
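libmachine hands this domain XML to libvirt and then waits for the VM to boot (the log continues below). A minimal sketch, assuming the libvirt.org/go/libvirt bindings rather than minikube's kvm2 driver, of starting an already-defined domain:

package main

import (
	"log"

	"libvirt.org/go/libvirt"
)

func main() {
	// Connect to the system libvirt daemon, the same URI used in the log above.
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName("default-k8s-diff-port-382606")
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	// Create() boots a defined-but-stopped domain, i.e. the XML shown above.
	if err := dom.Create(); err != nil {
		log.Fatal(err)
	}
	log.Println("domain is now running")
}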
	
	I1219 03:36:31.342655   51711 main.go:144] libmachine: waiting for domain to start...
	I1219 03:36:31.345734   51711 main.go:144] libmachine: domain is now running
	I1219 03:36:31.345778   51711 main.go:144] libmachine: waiting for IP...
	I1219 03:36:31.347227   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348141   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has current primary IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348163   51711 main.go:144] libmachine: found domain IP: 192.168.72.129
	I1219 03:36:31.348170   51711 main.go:144] libmachine: reserving static IP address...
	I1219 03:36:31.348677   51711 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.348704   51711 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-382606 - found existing host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"}
	I1219 03:36:31.348713   51711 main.go:144] libmachine: reserved static IP address 192.168.72.129 for domain default-k8s-diff-port-382606
	I1219 03:36:31.348731   51711 main.go:144] libmachine: waiting for SSH...
	I1219 03:36:31.348741   51711 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:36:31.351582   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352122   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.352155   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352422   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:31.352772   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:31.352782   51711 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:36:34.417281   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
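"no route to host" here simply means the freshly restarted VM has not brought its network up yet; the driver keeps re-dialing port 22 until SSH answers. A bare TCP probe with the same behaviour (standard library only; not libmachine's WaitForSSH):

package main

import (
	"log"
	"net"
	"time"
)

func main() {
	// Address taken from the log above; adjust for your own VM.
	addr := "192.168.72.129:22"
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
		if err == nil {
			conn.Close()
			log.Println("SSH port is reachable")
			return
		}
		log.Printf("still waiting: %v", err) // e.g. "no route to host" while the VM boots
		time.Sleep(5 * time.Second)
	}
	log.Fatal("gave up waiting for SSH")
}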
	I1219 03:36:31.980522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:36:35.707529   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.726958549s)
	I1219 03:36:35.707614   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:36:36.641432   51386 addons.go:500] Verifying addon dashboard=true in "embed-certs-832734"
	I1219 03:36:36.645285   51386 out.go:179] * Verifying dashboard addon...
	I1219 03:36:36.647847   51386 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:36:36.659465   51386 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:36:36.659491   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.154819   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.652042   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.152461   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.651730   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.152475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.652155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.153311   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.652427   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:41.151837   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.497282   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
	I1219 03:36:43.498703   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: connection refused
	I1219 03:36:41.654155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.154727   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.653186   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.152647   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.651177   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.154241   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.651752   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:45.152244   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.124796   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.151832   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.628602   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
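The "no route to host" and "connection refused" dials above are libmachine's WaitForSSH loop: it keeps dialing port 22 until the restarted guest finishes booting, and the empty SSH output here marks success. A rough manual equivalent, assuming the SSH key path and docker user that appear later in this log:
	until ssh -o ConnectTimeout=3 -o StrictHostKeyChecking=no \
	    -i /home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa \
	    docker@192.168.72.129 'exit 0' 2>/dev/null; do
	  sleep 3   # retry until sshd in the guest accepts connections
	done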
	I1219 03:36:46.632304   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.632730   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.632753   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.633056   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:46.633240   51711 machine.go:94] provisionDockerMachine start ...
	I1219 03:36:46.635441   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.635889   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.635934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.636109   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.636298   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.636308   51711 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:36:46.752911   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:36:46.752937   51711 buildroot.go:166] provisioning hostname "default-k8s-diff-port-382606"
	I1219 03:36:46.756912   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757425   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.757463   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757703   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.757935   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.757955   51711 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382606 && echo "default-k8s-diff-port-382606" | sudo tee /etc/hostname
	I1219 03:36:46.902266   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382606
	
	I1219 03:36:46.905791   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906293   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.906323   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906555   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.906758   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.906774   51711 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382606/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:36:47.045442   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
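The shell block above, run over SSH, adds a 127.0.1.1 entry for the node name to /etc/hosts unless a matching entry already exists. To confirm the result from the host, assuming the profile name shown in this log:
	minikube -p default-k8s-diff-port-382606 ssh -- grep default-k8s-diff-port-382606 /etc/hosts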
	I1219 03:36:47.045472   51711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5003/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5003/.minikube}
	I1219 03:36:47.045496   51711 buildroot.go:174] setting up certificates
	I1219 03:36:47.045505   51711 provision.go:84] configureAuth start
	I1219 03:36:47.049643   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.050087   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.050115   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.052980   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053377   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.053417   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053596   51711 provision.go:143] copyHostCerts
	I1219 03:36:47.053653   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem, removing ...
	I1219 03:36:47.053678   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem
	I1219 03:36:47.053772   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem (1082 bytes)
	I1219 03:36:47.053902   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem, removing ...
	I1219 03:36:47.053919   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem
	I1219 03:36:47.053949   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem (1123 bytes)
	I1219 03:36:47.054027   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem, removing ...
	I1219 03:36:47.054036   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem
	I1219 03:36:47.054059   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem (1675 bytes)
	I1219 03:36:47.054113   51711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382606 san=[127.0.0.1 192.168.72.129 default-k8s-diff-port-382606 localhost minikube]
	I1219 03:36:47.093786   51711 provision.go:177] copyRemoteCerts
	I1219 03:36:47.093848   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:36:47.096938   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097402   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.097443   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097608   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.187589   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:36:47.229519   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:36:47.264503   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:36:47.294746   51711 provision.go:87] duration metric: took 249.22829ms to configureAuth
	I1219 03:36:47.294772   51711 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:36:47.294974   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:47.294990   51711 machine.go:97] duration metric: took 661.738495ms to provisionDockerMachine
	I1219 03:36:47.295000   51711 start.go:293] postStartSetup for "default-k8s-diff-port-382606" (driver="kvm2")
	I1219 03:36:47.295020   51711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:36:47.295079   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:36:47.297915   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298388   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.298414   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298592   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.391351   51711 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:36:47.396636   51711 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:36:47.396664   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/addons for local assets ...
	I1219 03:36:47.396734   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/files for local assets ...
	I1219 03:36:47.396833   51711 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem -> 89782.pem in /etc/ssl/certs
	I1219 03:36:47.396981   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:36:47.414891   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:47.450785   51711 start.go:296] duration metric: took 155.770681ms for postStartSetup
	I1219 03:36:47.450829   51711 fix.go:56] duration metric: took 17.744433576s for fixHost
	I1219 03:36:47.453927   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454408   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.454438   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454581   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:47.454774   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:47.454784   51711 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:36:47.578960   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766115407.541226750
	
	I1219 03:36:47.578984   51711 fix.go:216] guest clock: 1766115407.541226750
	I1219 03:36:47.578993   51711 fix.go:229] Guest: 2025-12-19 03:36:47.54122675 +0000 UTC Remote: 2025-12-19 03:36:47.450834556 +0000 UTC m=+17.907032910 (delta=90.392194ms)
	I1219 03:36:47.579033   51711 fix.go:200] guest clock delta is within tolerance: 90.392194ms
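fix.go reads the guest clock over SSH with date +%s.%N, compares it to the host-side timestamp, and only resyncs when the delta exceeds its tolerance; the ~90ms seen here is accepted. A rough manual check of the same skew (it ignores the SSH round trip, so treat it as approximate):
	guest=$(minikube -p default-k8s-diff-port-382606 ssh -- date +%s.%N)
	host=$(date +%s.%N)
	echo "guest-host skew: $(echo "$guest - $host" | bc) s"   # small positive or negative values are fine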
	I1219 03:36:47.579039   51711 start.go:83] releasing machines lock for "default-k8s-diff-port-382606", held for 17.872657006s
	I1219 03:36:47.582214   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.582699   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.582737   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.583361   51711 ssh_runner.go:195] Run: cat /version.json
	I1219 03:36:47.583439   51711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:36:47.586735   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.586965   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587209   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587236   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587400   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.587637   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587663   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587852   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.701374   51711 ssh_runner.go:195] Run: systemctl --version
	I1219 03:36:47.707956   51711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:36:47.714921   51711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:36:47.714993   51711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:36:47.736464   51711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:36:47.736487   51711 start.go:496] detecting cgroup driver to use...
	I1219 03:36:47.736550   51711 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:36:47.771913   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:36:47.789225   51711 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:36:47.789292   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:36:47.814503   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:36:47.832961   51711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:36:48.004075   51711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:36:48.227207   51711 docker.go:234] disabling docker service ...
	I1219 03:36:48.227297   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:36:48.245923   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:36:48.261992   51711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:36:48.443743   51711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:36:48.627983   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:36:48.647391   51711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:36:48.673139   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:36:48.690643   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:36:48.703896   51711 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 03:36:48.703949   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 03:36:48.718567   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.732932   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:36:48.749170   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.772676   51711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:36:48.787125   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:36:48.800190   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:36:48.812900   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
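The sed edits above pin the pause image to registry.k8s.io/pause:3.10.1, force the cgroupfs driver (SystemdCgroup = false), map any runtime.v1/runc.v1 references to io.containerd.runc.v2, point the CNI conf_dir at /etc/cni/net.d and re-add enable_unprivileged_ports = true. A quick way to eyeball the result on the guest (exact indentation depends on the config.toml shipped in the ISO):
	sudo grep -nE 'sandbox_image|SystemdCgroup|conf_dir|enable_unprivileged_ports|restrict_oom_score_adj' /etc/containerd/config.toml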
	I1219 03:36:48.826147   51711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:36:48.841046   51711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:36:48.841107   51711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:36:48.867440   51711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
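The failed sysctl above is expected on a fresh boot: /proc/sys/net/bridge/ only exists once the br_netfilter module is loaded, so minikube loads it and then turns on IPv4 forwarding for the bridge CNI. The manual equivalent:
	sudo modprobe br_netfilter
	echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
	sysctl net.bridge.bridge-nf-call-iptables   # resolvable now; typically reports 1 once the module is loaded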
	I1219 03:36:48.879351   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:49.048166   51711 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:36:49.092003   51711 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:36:49.092122   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:36:49.098374   51711 retry.go:31] will retry after 1.402478088s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1219 03:36:50.501086   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:36:50.509026   51711 start.go:564] Will wait 60s for crictl version
	I1219 03:36:50.509089   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:50.514426   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:36:50.554888   51711 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
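crictl picks up its runtime endpoint from the /etc/crictl.yaml written at 03:36:48.647 above (runtime-endpoint: unix:///run/containerd/containerd.sock), which is why the bare crictl version call here just works. The same endpoint can also be passed explicitly:
	sudo crictl --runtime-endpoint unix:///run/containerd/containerd.sock version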
	I1219 03:36:50.554956   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.583326   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.611254   51711 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:36:46.651075   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.206126   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.654221   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.152458   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.651475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.152863   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.655859   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.152073   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.655613   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.153352   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.653895   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.151537   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.653336   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.156131   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.652752   51386 kapi.go:107] duration metric: took 17.00490252s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:36:53.654689   51386 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-832734 addons enable metrics-server
	
	I1219 03:36:53.656077   51386 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1219 03:36:50.615098   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615498   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:50.615532   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615798   51711 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1219 03:36:50.620834   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:36:50.637469   51711 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:36:50.637614   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:50.637684   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.668556   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.668578   51711 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:36:50.668632   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.703466   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.703488   51711 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:36:50.703495   51711 kubeadm.go:935] updating node { 192.168.72.129 8444 v1.34.3 containerd true true} ...
	I1219 03:36:50.703585   51711 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-382606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:36:50.703648   51711 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:36:50.734238   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:50.734260   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:50.734277   51711 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:36:50.734306   51711 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382606 NodeName:default-k8s-diff-port-382606 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:36:50.734471   51711 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-382606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.129"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:36:50.734558   51711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:36:50.746945   51711 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:36:50.746995   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:36:50.758948   51711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1219 03:36:50.782923   51711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:36:50.807164   51711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
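The three scp calls above install the kubelet drop-in, the kubelet unit and the rendered kubeadm config (the YAML printed in full a few lines up) as /var/tmp/minikube/kubeadm.yaml.new. Assuming shell access to the guest, the file can be inspected or sanity-checked with the bundled kubeadm; the config validate subcommand is assumed to be available, as it only exists in reasonably recent kubeadm releases:
	sudo cat /var/tmp/minikube/kubeadm.yaml.new
	sudo /var/lib/minikube/binaries/v1.34.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new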
	I1219 03:36:50.829562   51711 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I1219 03:36:50.833888   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:36:50.849703   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:51.014216   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:51.062118   51711 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606 for IP: 192.168.72.129
	I1219 03:36:51.062147   51711 certs.go:195] generating shared ca certs ...
	I1219 03:36:51.062168   51711 certs.go:227] acquiring lock for ca certs: {Name:mk6db7e23547b9013e447eaa0ddba18e05213211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.062409   51711 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key
	I1219 03:36:51.062517   51711 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key
	I1219 03:36:51.062542   51711 certs.go:257] generating profile certs ...
	I1219 03:36:51.062681   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/client.key
	I1219 03:36:51.062791   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key.13c41c2b
	I1219 03:36:51.062855   51711 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key
	I1219 03:36:51.063062   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem (1338 bytes)
	W1219 03:36:51.063113   51711 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978_empty.pem, impossibly tiny 0 bytes
	I1219 03:36:51.063130   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:36:51.063176   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:36:51.063218   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:36:51.063256   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem (1675 bytes)
	I1219 03:36:51.063324   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:51.064049   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:36:51.108621   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:36:51.164027   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:36:51.199337   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:36:51.234216   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:36:51.283158   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:36:51.314148   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:36:51.344498   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:36:51.374002   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:36:51.403858   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem --> /usr/share/ca-certificates/8978.pem (1338 bytes)
	I1219 03:36:51.438346   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /usr/share/ca-certificates/89782.pem (1708 bytes)
	I1219 03:36:51.476174   51711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:36:51.499199   51711 ssh_runner.go:195] Run: openssl version
	I1219 03:36:51.506702   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.518665   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8978.pem /etc/ssl/certs/8978.pem
	I1219 03:36:51.530739   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536107   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:37 /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536167   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.543417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:36:51.554750   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8978.pem /etc/ssl/certs/51391683.0
	I1219 03:36:51.566106   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.577342   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89782.pem /etc/ssl/certs/89782.pem
	I1219 03:36:51.588583   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594342   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:37 /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594386   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.602417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.614493   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89782.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.626108   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.638273   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:36:51.650073   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655546   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655600   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.662728   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:36:51.675457   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
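The .0 symlink names above (51391683.0, 3ec20f2e.0, b5213941.0) are OpenSSL subject hashes: whatever openssl x509 -hash prints becomes the filename the TLS stack looks up under /etc/ssl/certs. Reproducing the minikubeCA link by hand, for example:
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # prints b5213941 for this CA
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"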
	I1219 03:36:51.687999   51711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:36:51.693178   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:36:51.700656   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:36:51.708623   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:36:51.715865   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:36:51.725468   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:36:51.732847   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
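Each openssl x509 -checkend 86400 call above exits non-zero if the certificate expires within the next 24 hours, so a clean run through these lines means every control-plane cert is still usable. A compact sweep over the same files, assuming shell access to the guest:
	for c in /var/lib/minikube/certs/*.crt /var/lib/minikube/certs/etcd/*.crt; do
	  sudo openssl x509 -noout -checkend 86400 -in "$c" || echo "expiring within 24h: $c"
	done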
	I1219 03:36:51.739988   51711 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:51.740068   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:36:51.740145   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.779756   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.779780   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.779786   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.779790   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.779794   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.779800   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.779804   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.779808   51711 cri.go:92] found id: ""
	I1219 03:36:51.779864   51711 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1219 03:36:51.796814   51711 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:36:51Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1219 03:36:51.796914   51711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:36:51.809895   51711 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:36:51.809912   51711 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:36:51.809956   51711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:36:51.821465   51711 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:36:51.822684   51711 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-382606" does not appear in /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:51.823576   51711 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5003/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-382606" cluster setting kubeconfig missing "default-k8s-diff-port-382606" context setting]
	I1219 03:36:51.824679   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.826925   51711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:36:51.838686   51711 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.129
	I1219 03:36:51.838723   51711 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:36:51.838740   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1219 03:36:51.838793   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.874959   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.874981   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.874995   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.874998   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.875001   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.875004   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.875019   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.875022   51711 cri.go:92] found id: ""
	I1219 03:36:51.875027   51711 cri.go:255] Stopping containers: [64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c]
	I1219 03:36:51.875080   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:51.879700   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c
	I1219 03:36:51.939513   51711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
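For reference, the two commands above (crictl ps filtered by the kube-system pod-namespace label, then crictl stop with a 10s timeout) are what minikube drives over SSH at this step; a minimal local sketch of the same sequence using os/exec, assuming crictl is on PATH and sudo is available, could look like:

// stop_kube_system.go: list kube-system containers via crictl and stop them,
// mirroring the "stopping kube-system containers" step in the log above.
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// List container IDs (one per line) whose pod namespace label is kube-system.
	out, err := exec.Command("sudo", "crictl", "--timeout=10s", "ps", "-a",
		"--quiet", "--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		log.Fatalf("crictl ps: %v", err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	// Stop them with the same 10-second per-container timeout used in the log.
	args := append([]string{"crictl", "stop", "--timeout=10"}, ids...)
	if err := exec.Command("sudo", args...).Run(); err != nil {
		log.Fatalf("crictl stop: %v", err)
	}
	fmt.Printf("stopped %d container(s)\n", len(ids))
}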
	I1219 03:36:51.985557   51711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:36:51.999714   51711 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:36:51.999739   51711 kubeadm.go:158] found existing configuration files:
	
	I1219 03:36:51.999807   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:36:52.011529   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:36:52.011594   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:36:52.023630   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:36:52.036507   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:36:52.036566   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:36:52.048019   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.061421   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:36:52.061498   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.073436   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:36:52.084186   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:36:52.084244   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:36:52.098426   51711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:36:52.111056   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:52.261515   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.323343   51711 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.061779829s)
	I1219 03:36:54.323428   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.593075   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
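The kubeadm commands above regenerate the control plane phase by phase (certs, kubeconfig, kubelet-start, control-plane; the etcd phase follows a little further down in the log) instead of re-running a full kubeadm init. A sketch of driving the same phase sequence from Go, assuming kubeadm is on PATH and the kubeadm.yaml shown above already exists on the node:

// rerun_phases.go: re-run the individual kubeadm init phases from the log,
// in the same order, against the existing /var/tmp/minikube/kubeadm.yaml.
package main

import (
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		args := append([]string{"kubeadm", "init", "phase"}, strings.Fields(p)...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")
		cmd := exec.Command("sudo", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		log.Printf("running: kubeadm init phase %s", p)
		if err := cmd.Run(); err != nil {
			log.Fatalf("phase %q failed: %v", p, err)
		}
	}
}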
	I1219 03:36:53.657242   51386 addons.go:546] duration metric: took 25.688774629s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1219 03:36:53.657289   51386 start.go:247] waiting for cluster config update ...
	I1219 03:36:53.657306   51386 start.go:256] writing updated cluster config ...
	I1219 03:36:53.657575   51386 ssh_runner.go:195] Run: rm -f paused
	I1219 03:36:53.663463   51386 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:53.667135   51386 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.672738   51386 pod_ready.go:94] pod "coredns-66bc5c9577-4csbt" is "Ready"
	I1219 03:36:53.672765   51386 pod_ready.go:86] duration metric: took 5.607283ms for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.675345   51386 pod_ready.go:83] waiting for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.679709   51386 pod_ready.go:94] pod "etcd-embed-certs-832734" is "Ready"
	I1219 03:36:53.679732   51386 pod_ready.go:86] duration metric: took 4.36675ms for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.681513   51386 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.685784   51386 pod_ready.go:94] pod "kube-apiserver-embed-certs-832734" is "Ready"
	I1219 03:36:53.685803   51386 pod_ready.go:86] duration metric: took 4.273628ms for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.688112   51386 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.068844   51386 pod_ready.go:94] pod "kube-controller-manager-embed-certs-832734" is "Ready"
	I1219 03:36:54.068878   51386 pod_ready.go:86] duration metric: took 380.74628ms for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.268799   51386 pod_ready.go:83] waiting for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.668935   51386 pod_ready.go:94] pod "kube-proxy-j49gn" is "Ready"
	I1219 03:36:54.668971   51386 pod_ready.go:86] duration metric: took 400.137967ms for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.868862   51386 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269481   51386 pod_ready.go:94] pod "kube-scheduler-embed-certs-832734" is "Ready"
	I1219 03:36:55.269512   51386 pod_ready.go:86] duration metric: took 400.62266ms for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269530   51386 pod_ready.go:40] duration metric: took 1.60604049s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:55.329865   51386 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:36:55.331217   51386 out.go:179] * Done! kubectl is now configured to use "embed-certs-832734" cluster and "default" namespace by default
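The pod_ready.go lines above check each control-plane pod until its PodReady condition reports True. A minimal client-go sketch of that per-pod check (the kubeconfig path and pod names are illustrative, taken from the log):

// pod_ready_sketch.go: report whether named kube-system pods have Ready=True.
package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config") // adjust for your kubeconfig
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Pod names taken from the log above; any kube-system pod name works here.
	for _, name := range []string{"etcd-embed-certs-832734", "kube-apiserver-embed-certs-832734"} {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			log.Printf("get %s: %v", name, err)
			continue
		}
		ready := false
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
				ready = true
			}
		}
		fmt.Printf("%s ready=%v\n", name, ready)
	}
}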
	I1219 03:36:54.658040   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.764830   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:54.764901   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.265628   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.765546   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.265137   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.294858   51711 api_server.go:72] duration metric: took 1.53003596s to wait for apiserver process to appear ...
	I1219 03:36:56.294894   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:56.294920   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:56.295516   51711 api_server.go:269] stopped: https://192.168.72.129:8444/healthz: Get "https://192.168.72.129:8444/healthz": dial tcp 192.168.72.129:8444: connect: connection refused
	I1219 03:36:56.795253   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.818365   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.818396   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:36:59.818426   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.867609   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.867642   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:37:00.295133   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.300691   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.300720   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:00.795111   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.825034   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.825068   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.295554   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.307047   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.307078   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.795401   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.800055   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.800091   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.295888   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.302103   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.302125   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.795818   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.802296   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.802326   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:03.296021   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:03.301661   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:03.310379   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:03.310412   51711 api_server.go:131] duration metric: took 7.01550899s to wait for apiserver health ...
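The 403 → 500 → 200 progression above is the apiserver coming up: the anonymous /healthz request is forbidden until the RBAC bootstrap roles exist, then the remaining post-start-hook checks clear one by one until the endpoint returns a plain "ok". A minimal sketch of the same polling loop, anonymous and with TLS verification skipped as in the probe above (the endpoint and timeouts here are illustrative):

// wait_healthz.go: poll the apiserver /healthz endpoint until it returns 200 "ok".
// Early responses may be 403 or 500 while the control plane finishes starting.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.129:8444/healthz" // endpoint from the log; adjust for your cluster
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			log.Printf("healthz: %v (apiserver not listening yet)", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			log.Printf("healthz: %d %s", resp.StatusCode, firstLine(string(body)))
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("timed out waiting for apiserver /healthz")
}

// firstLine trims the multi-line healthz body down to its first line for logging.
func firstLine(s string) string {
	for i, r := range s {
		if r == '\n' {
			return s[:i]
		}
	}
	return s
}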
	I1219 03:37:03.310425   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:37:03.310437   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:37:03.312477   51711 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:37:03.313819   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:37:03.331177   51711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:37:03.360466   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:03.365800   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:03.365852   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:37:03.365866   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:03.365876   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:03.365889   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:37:03.365896   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:03.365910   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:03.365918   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:03.365924   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:03.365935   51711 system_pods.go:74] duration metric: took 5.441032ms to wait for pod list to return data ...
	I1219 03:37:03.365944   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:03.369512   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:03.369539   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:03.369553   51711 node_conditions.go:105] duration metric: took 3.601059ms to run NodePressure ...
	I1219 03:37:03.369618   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:37:03.647329   51711 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651092   51711 kubeadm.go:744] kubelet initialised
	I1219 03:37:03.651116   51711 kubeadm.go:745] duration metric: took 3.75629ms waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651137   51711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:37:03.667607   51711 ops.go:34] apiserver oom_adj: -16
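The oom_adj check above confirms the restarted kube-apiserver runs with a strongly negative legacy oom_adj (-16), i.e. it is a very unlikely OOM-kill target. A small sketch of the same pgrep + /proc lookup:

// apiserver_oom.go: find the newest kube-apiserver process and print its legacy
// oom_adj value, the same check the log performs with pgrep and /proc.
package main

import (
	"fmt"
	"log"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep -xnf matches the regex against the full command line and returns the newest PID.
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*").Output()
	if err != nil {
		log.Fatalf("kube-apiserver process not found: %v", err)
	}
	pid := strings.TrimSpace(string(out))
	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
	if err != nil {
		log.Fatalf("read oom_adj: %v", err)
	}
	fmt.Printf("kube-apiserver pid %s oom_adj: %s", pid, adj)
}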
	I1219 03:37:03.667629   51711 kubeadm.go:602] duration metric: took 11.857709737s to restartPrimaryControlPlane
	I1219 03:37:03.667638   51711 kubeadm.go:403] duration metric: took 11.927656699s to StartCluster
	I1219 03:37:03.667662   51711 settings.go:142] acquiring lock: {Name:mk7f7ba85357bfc9fca2e66b70b16d967ca355d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.667744   51711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:37:03.669684   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.669943   51711 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:37:03.670026   51711 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:37:03.670125   51711 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670141   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:37:03.670153   51711 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670165   51711 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670174   51711 addons.go:248] addon metrics-server should already be in state true
	I1219 03:37:03.670145   51711 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382606"
	I1219 03:37:03.670175   51711 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670219   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.670222   51711 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382606"
	I1219 03:37:03.670185   51711 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670315   51711 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670328   51711 addons.go:248] addon dashboard should already be in state true
	I1219 03:37:03.670352   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	W1219 03:37:03.670200   51711 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:37:03.670428   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.671212   51711 out.go:179] * Verifying Kubernetes components...
	I1219 03:37:03.672712   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:37:03.673624   51711 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:03.673642   51711 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:37:03.674241   51711 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:37:03.674256   51711 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:37:03.674842   51711 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.674857   51711 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:37:03.674871   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.675431   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:37:03.675448   51711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:37:03.675481   51711 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:03.675502   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:37:03.677064   51711 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:03.677081   51711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:37:03.677620   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678481   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.678567   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678872   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.680203   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680419   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680904   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.680934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681162   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681407   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681444   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681467   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681685   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681950   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681982   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.682175   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.929043   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:37:03.969693   51711 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:04.174684   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:04.182529   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:04.184635   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:37:04.184660   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:37:04.197532   51711 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:37:04.242429   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:37:04.242455   51711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:37:04.309574   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:04.309600   51711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:37:04.367754   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:05.660040   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.485300577s)
	I1219 03:37:05.660070   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.477513606s)
	I1219 03:37:05.660116   51711 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.462552784s)
	I1219 03:37:05.660185   51711 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:37:05.673056   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.305263658s)
	I1219 03:37:05.673098   51711 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-382606"
	I1219 03:37:05.673137   51711 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	W1219 03:37:05.974619   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:06.630759   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	W1219 03:37:08.472974   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:10.195765   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.56493028s)
	I1219 03:37:10.195868   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:10.536948   51711 node_ready.go:49] node "default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:10.536984   51711 node_ready.go:38] duration metric: took 6.567254454s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:10.536999   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:37:10.537074   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:37:10.631962   51711 api_server.go:72] duration metric: took 6.961979571s to wait for apiserver process to appear ...
	I1219 03:37:10.631998   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:37:10.632041   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:10.633102   51711 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-382606"
	I1219 03:37:10.637827   51711 out.go:179] * Verifying dashboard addon...
	I1219 03:37:10.641108   51711 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:37:10.648897   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:10.650072   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:10.650099   51711 api_server.go:131] duration metric: took 18.093601ms to wait for apiserver health ...
	I1219 03:37:10.650110   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:10.655610   51711 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:37:10.655627   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:10.657971   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:10.657998   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.658023   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.658033   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.658042   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.658048   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.658055   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.658064   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.658069   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.658080   51711 system_pods.go:74] duration metric: took 7.963499ms to wait for pod list to return data ...
	I1219 03:37:10.658089   51711 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:37:10.668090   51711 default_sa.go:45] found service account: "default"
	I1219 03:37:10.668118   51711 default_sa.go:55] duration metric: took 10.020956ms for default service account to be created ...
	I1219 03:37:10.668130   51711 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:37:10.680469   51711 system_pods.go:86] 8 kube-system pods found
	I1219 03:37:10.680493   51711 system_pods.go:89] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.680507   51711 system_pods.go:89] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.680513   51711 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.680520   51711 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.680525   51711 system_pods.go:89] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.680532   51711 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.680540   51711 system_pods.go:89] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.680555   51711 system_pods.go:89] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.680567   51711 system_pods.go:126] duration metric: took 12.428884ms to wait for k8s-apps to be running ...
	I1219 03:37:10.680577   51711 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:37:10.680634   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:37:10.723844   51711 system_svc.go:56] duration metric: took 43.258925ms WaitForService to wait for kubelet
	I1219 03:37:10.723871   51711 kubeadm.go:587] duration metric: took 7.05389644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:37:10.723887   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:10.731598   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:10.731620   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:10.731629   51711 node_conditions.go:105] duration metric: took 7.738835ms to run NodePressure ...
	I1219 03:37:10.731640   51711 start.go:242] waiting for startup goroutines ...
	I1219 03:37:11.145699   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:11.645111   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.144952   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.644987   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.151074   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.645695   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.146399   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.645725   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.146044   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.645372   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.145700   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.645126   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.145189   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.645089   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.151071   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.645879   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.145525   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.645572   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.144405   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.647145   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.145368   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.653732   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.146443   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.645800   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.145131   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.644929   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.145023   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.646072   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.145868   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.647994   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.147617   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.648227   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.149067   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.645432   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.145986   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.645392   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:29.149926   51711 kapi.go:107] duration metric: took 18.508817791s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:37:29.152664   51711 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-382606 addons enable metrics-server
	
	I1219 03:37:29.153867   51711 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:37:29.155085   51711 addons.go:546] duration metric: took 25.485078365s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:37:29.155131   51711 start.go:247] waiting for cluster config update ...
	I1219 03:37:29.155147   51711 start.go:256] writing updated cluster config ...
	I1219 03:37:29.156022   51711 ssh_runner.go:195] Run: rm -f paused
	I1219 03:37:29.170244   51711 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:29.178962   51711 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.186205   51711 pod_ready.go:94] pod "coredns-66bc5c9577-bzq6s" is "Ready"
	I1219 03:37:29.186234   51711 pod_ready.go:86] duration metric: took 7.24885ms for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.280615   51711 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.286426   51711 pod_ready.go:94] pod "etcd-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.286446   51711 pod_ready.go:86] duration metric: took 5.805885ms for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.288885   51711 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.293769   51711 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.293787   51711 pod_ready.go:86] duration metric: took 4.884445ms for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.296432   51711 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.576349   51711 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.576388   51711 pod_ready.go:86] duration metric: took 279.933458ms for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.777084   51711 pod_ready.go:83] waiting for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.176016   51711 pod_ready.go:94] pod "kube-proxy-vhml9" is "Ready"
	I1219 03:37:30.176047   51711 pod_ready.go:86] duration metric: took 398.930848ms for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.377206   51711 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776837   51711 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:30.776861   51711 pod_ready.go:86] duration metric: took 399.600189ms for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776872   51711 pod_ready.go:40] duration metric: took 1.606601039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:30.827211   51711 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:37:30.828493   51711 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-382606" cluster and "default" namespace by default
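For reference, the pod_ready.go waits logged above poll each kube-system pod until its Ready condition reports True, at roughly 500ms intervals. A minimal client-go sketch of that kind of wait (hypothetical code, not minikube's implementation; the kubeconfig path, pod name, interval and timeout are assumptions taken from the log):

// Sketch only: poll a pod until its Ready condition is True, similar in spirit
// to the pod_ready.go waits in the log above. Not minikube code.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitPodReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil // pod reports Ready, as in the pod_ready.go lines above
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // comparable to the ~500ms polling seen in the log
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(context.Background(), cs, "kube-system", "coredns-66bc5c9577-bzq6s", 4*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("ready")
}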
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	944453a82c5aa       6e38f40d628db       8 minutes ago       Running             storage-provisioner                    2                   ed7a60a20e2dc       storage-provisioner                                     kube-system
	57718e1249a5f       3a975970da2f5       8 minutes ago       Running             proxy                                  0                   5a6d18a9b02b5       kubernetes-dashboard-kong-78b7499b45-k5gpr              kubernetes-dashboard
	a1d949e23b7c7       3a975970da2f5       8 minutes ago       Exited              clear-stale-pid                        0                   5a6d18a9b02b5       kubernetes-dashboard-kong-78b7499b45-k5gpr              kubernetes-dashboard
	7b0a529acf54e       a0607af4fcd8a       9 minutes ago       Running             kubernetes-dashboard-api               0                   a9292ca47ec35       kubernetes-dashboard-api-68f55bc586-nnhm4               kubernetes-dashboard
	f43ad99d90fdc       59f642f485d26       9 minutes ago       Running             kubernetes-dashboard-web               0                   3265cd33f89ee       kubernetes-dashboard-web-7f7574785f-kl9q5               kubernetes-dashboard
	47bce22eb1c3d       d9cbc9f4053ca       9 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   ec4c4393f5099       kubernetes-dashboard-metrics-scraper-594bbfb84b-m7bxg   kubernetes-dashboard
	4ab0badeeeac9       dd54374d0ab14       9 minutes ago       Running             kubernetes-dashboard-auth              0                   7882aad4ead25       kubernetes-dashboard-auth-7f98f4d65c-88rd2              kubernetes-dashboard
	e62eb42de605a       aa5e3ebc0dfed       9 minutes ago       Running             coredns                                1                   a7280df0fffea       coredns-7d764666f9-x9688                                kube-system
	6b66d3ad8ceef       56cc512116c8f       9 minutes ago       Running             busybox                                1                   1a50d540f4edd       busybox                                                 default
	18e278297341f       af0321f3a4f38       9 minutes ago       Running             kube-proxy                             1                   68823e5c5e44c       kube-proxy-9kmrc                                        kube-system
	974c9df3fdb6e       6e38f40d628db       9 minutes ago       Exited              storage-provisioner                    1                   ed7a60a20e2dc       storage-provisioner                                     kube-system
	d4b2c0b372751       5032a56602e1b       9 minutes ago       Running             kube-controller-manager                1                   70dc97ffa012f       kube-controller-manager-no-preload-728806               kube-system
	69e38503d0f4a       0a108f7189562       9 minutes ago       Running             etcd                                   1                   ea01907d2d0ff       etcd-no-preload-728806                                  kube-system
	31fe13610d626       73f80cdc073da       9 minutes ago       Running             kube-scheduler                         1                   d4d20e11ed4b0       kube-scheduler-no-preload-728806                        kube-system
	8e9475c78e5b9       58865405a13bc       9 minutes ago       Running             kube-apiserver                         1                   45995925bd2de       kube-apiserver-no-preload-728806                        kube-system
	2eaa04351e239       56cc512116c8f       11 minutes ago      Exited              busybox                                0                   db288691afa59       busybox                                                 default
	0daf3a0d964c7       aa5e3ebc0dfed       12 minutes ago      Exited              coredns                                0                   bdbf4bbb83a3b       coredns-7d764666f9-x9688                                kube-system
	ba269e021c7b5       af0321f3a4f38       12 minutes ago      Exited              kube-proxy                             0                   f6b0e3b0f100e       kube-proxy-9kmrc                                        kube-system
	9e837e7d646d3       5032a56602e1b       12 minutes ago      Exited              kube-controller-manager                0                   82aa7824bae05       kube-controller-manager-no-preload-728806               kube-system
	e95ce55118a31       73f80cdc073da       12 minutes ago      Exited              kube-scheduler                         0                   92a7b35073023       kube-scheduler-no-preload-728806                        kube-system
	d76db54c93b48       0a108f7189562       12 minutes ago      Exited              etcd                                   0                   937b817e0c237       etcd-no-preload-728806                                  kube-system
	78fce5e539795       58865405a13bc       12 minutes ago      Exited              kube-apiserver                         0                   b28b150383205       kube-apiserver-no-preload-728806                        kube-system
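The container status table above is the CRI-level listing collected after the failure. A rough sketch of producing a similar listing with the containerd Go client (illustrative only; the socket path and the k8s.io namespace are the conventional defaults, and these 1.x import paths differ under the containerd 2.x that this node actually runs):

// Sketch only: list Kubernetes-managed containers directly from containerd,
// roughly what the status table above reports. Assumes the containerd 1.x client.
package main

import (
	"context"
	"fmt"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubernetes-managed containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		name := "<unknown image>"
		if img, err := c.Image(ctx); err == nil {
			name = img.Name()
		}
		fmt.Printf("%s\t%s\n", c.ID(), name)
	}
}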
	
	
	==> containerd <==
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.836706921Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2c9ed5ac1044fad5ac7c7ead9dca926a/8e9475c78e5b931b2d174b98c7f3095a1ee185519d00942f21d0ebf9092be1a0/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.838043844Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod8b96b916-9a7f-4d6d-9e31-aad0b0358a6b/6b66d3ad8ceefa6126aa63b7ab94cd560eddd093ff9d8d775639f4c6f9183d7e/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.838753167Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod425da6bb99dfb8cef077665118dd8f70/69e38503d0f4a7b85114416cbd244c14828460424d87ddf9dcec627e11f6d019/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.839632090Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podc626899b-06fd-4952-810c-a87343019170/18e278297341f2ef6879a8f2a0674670948d8c3f6abb39a02ef05bd379a52a3e/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.840279670Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf36e31fd-042e-433e-a7c5-a134902d3898/e62eb42de605a3fd4f852cc3a22bbb2ed2dde85681502e3e3010ca301d879b82/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.841245943Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod81b57837-5fb9-47e8-8129-e21af498d464/944453a82c5aa9d3c5e09a2a924586b9708bf3f99f7878d28d61b7e7c1fee4c8/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.842331408Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod80fc013f-db9d-4834-a88c-6730bc1e786e/4ab0badeeeac9066abd92e7322c2ebdf2519d10c318b6dbb9979651de9f47051/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.843306022Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod750cce99-cc88-428c-8026-6bf7cf14c959/47bce22eb1c3d05d42b992a32a07de7295a66c8221d28877512dd638f7211103/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.844569513Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd2315740-909c-4982-abd4-594425918b9d/f43ad99d90fdcbc40aff01641cbe5c36c47cb8ac50d0891cd9bb37a3ba555617/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.846414319Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod72cb839e-11d5-4529-a6e4-b296f089c405/7b0a529acf54eff15dbdb1f47e2ad6584d347a66e1e80bcb4d9664c4444610fd/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.847509331Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod5b9fe8fcc4d0408f4baa26793dc5e565/31fe13610d626eb1358e924d8da122c03230ae2e208af3021661987752f9fb4a/hugetlb.2MB.events\""
	Dec 19 03:45:15 no-preload-728806 containerd[719]: time="2025-12-19T03:45:15.849269834Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod8d7c099790ad554c018d4a29ed4b2e09/d4b2c0b372751a1bc59d857101659a4a9bfe03f44e49264fc35f2b4bf8d1e6c2/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.873406743Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf36e31fd-042e-433e-a7c5-a134902d3898/e62eb42de605a3fd4f852cc3a22bbb2ed2dde85681502e3e3010ca301d879b82/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.875693011Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod81b57837-5fb9-47e8-8129-e21af498d464/944453a82c5aa9d3c5e09a2a924586b9708bf3f99f7878d28d61b7e7c1fee4c8/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.878900911Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod80fc013f-db9d-4834-a88c-6730bc1e786e/4ab0badeeeac9066abd92e7322c2ebdf2519d10c318b6dbb9979651de9f47051/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.880704337Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod750cce99-cc88-428c-8026-6bf7cf14c959/47bce22eb1c3d05d42b992a32a07de7295a66c8221d28877512dd638f7211103/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.881993947Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd2315740-909c-4982-abd4-594425918b9d/f43ad99d90fdcbc40aff01641cbe5c36c47cb8ac50d0891cd9bb37a3ba555617/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.885201118Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod72cb839e-11d5-4529-a6e4-b296f089c405/7b0a529acf54eff15dbdb1f47e2ad6584d347a66e1e80bcb4d9664c4444610fd/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.888021262Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod5b9fe8fcc4d0408f4baa26793dc5e565/31fe13610d626eb1358e924d8da122c03230ae2e208af3021661987752f9fb4a/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.889157945Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod8d7c099790ad554c018d4a29ed4b2e09/d4b2c0b372751a1bc59d857101659a4a9bfe03f44e49264fc35f2b4bf8d1e6c2/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.890219950Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda5072acb-a3af-46a6-9b98-5b0623b96a12/57718e1249a5f35c50a73ad17a1694996de936bb2a37159c31cbfa1e94a0efc9/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.891106077Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2c9ed5ac1044fad5ac7c7ead9dca926a/8e9475c78e5b931b2d174b98c7f3095a1ee185519d00942f21d0ebf9092be1a0/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.892317783Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod8b96b916-9a7f-4d6d-9e31-aad0b0358a6b/6b66d3ad8ceefa6126aa63b7ab94cd560eddd093ff9d8d775639f4c6f9183d7e/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.893139990Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod425da6bb99dfb8cef077665118dd8f70/69e38503d0f4a7b85114416cbd244c14828460424d87ddf9dcec627e11f6d019/hugetlb.2MB.events\""
	Dec 19 03:45:25 no-preload-728806 containerd[719]: time="2025-12-19T03:45:25.893885326Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podc626899b-06fd-4952-810c-a87343019170/18e278297341f2ef6879a8f2a0674670948d8c3f6abb39a02ef05bd379a52a3e/hugetlb.2MB.events\""
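The repeated containerd errors above come from a stats collector that expects a bare integer in hugetlb.2MB.events, while the cgroup v2 events file actually holds a "max <count>" key/value line, hence the failure to parse "max 0" as a uint. A minimal sketch of reading that format tolerantly (illustrative only; readHugetlbMaxEvents is a hypothetical helper, not containerd or cAdvisor code):

// Sketch only: parse a cgroup v2 hugetlb.<size>.events file whose content is
// a "max <count>" line rather than a bare integer.
package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// readHugetlbMaxEvents returns the "max" counter from an events file such as
// /sys/fs/cgroup/<path>/hugetlb.2MB.events, whose content looks like "max 0".
func readHugetlbMaxEvents(path string) (uint64, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return 0, err
	}
	for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
		fields := strings.Fields(line)
		if len(fields) == 2 && fields[0] == "max" {
			return strconv.ParseUint(fields[1], 10, 64)
		}
	}
	return 0, fmt.Errorf("no max entry in %s", path)
}

func main() {
	n, err := readHugetlbMaxEvents("/sys/fs/cgroup/hugetlb.2MB.events")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("hugetlb 2MB max events:", n)
}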
	
	
	==> coredns [0daf3a0d964c769fdfcb3212d2577b256892a31a7645cfd15ea40a6da28089e8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:47780 - 25751 "HINFO IN 5948633206316442089.7485547438095066407. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016123219s
	
	
	==> coredns [e62eb42de605a3fd4f852cc3a22bbb2ed2dde85681502e3e3010ca301d879b82] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52055 - 53030 "HINFO IN 742552530284827370.5350346264999543959. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.017263299s
	
	
	==> describe nodes <==
	Name:               no-preload-728806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-728806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-728806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_33_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:33:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-728806
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:45:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:43:57 +0000   Fri, 19 Dec 2025 03:33:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:43:57 +0000   Fri, 19 Dec 2025 03:33:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:43:57 +0000   Fri, 19 Dec 2025 03:33:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:43:57 +0000   Fri, 19 Dec 2025 03:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.172
	  Hostname:    no-preload-728806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 de6d11f26d144166878a0f5a46eed7b7
	  System UUID:                de6d11f2-6d14-4166-878a-0f5a46eed7b7
	  Boot ID:                    1ee2860e-8c54-45ec-b573-c2c3ef6b4e05
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7d764666f9-x9688                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-no-preload-728806                                   100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-no-preload-728806                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-no-preload-728806                200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-9kmrc                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-728806                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-5d785b57d4-9zx57                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        kubernetes-dashboard-api-68f55bc586-nnhm4                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m19s
	  kubernetes-dashboard        kubernetes-dashboard-auth-7f98f4d65c-88rd2               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m19s
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-k5gpr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m19s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-m7bxg    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m19s
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-kl9q5                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m19s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  RegisteredNode  12m    node-controller  Node no-preload-728806 event: Registered Node no-preload-728806 in Controller
	  Normal  RegisteredNode  9m24s  node-controller  Node no-preload-728806 event: Registered Node no-preload-728806 in Controller
	
	
	==> dmesg <==
	[Dec19 03:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006017] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.892610] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.104228] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.689117] kauditd_printk_skb: 199 callbacks suppressed
	[Dec19 03:36] kauditd_printk_skb: 227 callbacks suppressed
	[  +0.034544] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.043875] kauditd_printk_skb: 183 callbacks suppressed
	[  +6.308125] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.870794] kauditd_printk_skb: 32 callbacks suppressed
	[ +14.225211] kauditd_printk_skb: 42 callbacks suppressed
	
	
	==> etcd [69e38503d0f4a7b85114416cbd244c14828460424d87ddf9dcec627e11f6d019] <==
	{"level":"info","ts":"2025-12-19T03:36:08.189989Z","caller":"traceutil/trace.go:172","msg":"trace[2132561140] transaction","detail":"{read_only:false; response_revision:735; number_of_response:1; }","duration":"311.33942ms","start":"2025-12-19T03:36:07.878632Z","end":"2025-12-19T03:36:08.189971Z","steps":["trace[2132561140] 'process raft request'  (duration: 267.978521ms)","trace[2132561140] 'compare'  (duration: 43.11185ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:36:08.190127Z","caller":"traceutil/trace.go:172","msg":"trace[1703528734] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/default; range_end:; response_count:1; response_revision:734; }","duration":"145.087437ms","start":"2025-12-19T03:36:08.045022Z","end":"2025-12-19T03:36:08.190109Z","steps":["trace[1703528734] 'agreement among raft nodes before linearized reading'  (duration: 101.080938ms)","trace[1703528734] 'range keys from in-memory index tree'  (duration: 43.417487ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:36:08.190893Z","caller":"traceutil/trace.go:172","msg":"trace[1473546027] transaction","detail":"{read_only:false; response_revision:736; number_of_response:1; }","duration":"312.137156ms","start":"2025-12-19T03:36:07.878721Z","end":"2025-12-19T03:36:08.190858Z","steps":["trace[1473546027] 'process raft request'  (duration: 311.18664ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.191240Z","caller":"traceutil/trace.go:172","msg":"trace[1819298651] transaction","detail":"{read_only:false; response_revision:737; number_of_response:1; }","duration":"295.931198ms","start":"2025-12-19T03:36:07.895300Z","end":"2025-12-19T03:36:08.191231Z","steps":["trace[1819298651] 'process raft request'  (duration: 295.525454ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:08.193598Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:36:07.878612Z","time spent":"311.532399ms","remote":"127.0.0.1:51792","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5076,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard-api\" mod_revision:711 > success:<request_put:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard-api\" value_size:5001 >> failure:<request_range:<key:\"/registry/deployments/kubernetes-dashboard/kubernetes-dashboard-api\" > >"}
	{"level":"warn","ts":"2025-12-19T03:36:08.193834Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:36:07.878715Z","time spent":"312.198394ms","remote":"127.0.0.1:51852","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":12912,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45\" mod_revision:708 > success:<request_put:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45\" value_size:12825 >> failure:<request_range:<key:\"/registry/replicasets/kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45\" > >"}
	{"level":"info","ts":"2025-12-19T03:36:08.195727Z","caller":"traceutil/trace.go:172","msg":"trace[1369365917] transaction","detail":"{read_only:false; response_revision:738; number_of_response:1; }","duration":"297.126525ms","start":"2025-12-19T03:36:07.898591Z","end":"2025-12-19T03:36:08.195718Z","steps":["trace[1369365917] 'process raft request'  (duration: 292.605953ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:08.196595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.179425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/admin-user\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:36:08.196703Z","caller":"traceutil/trace.go:172","msg":"trace[1905526564] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/admin-user; range_end:; response_count:0; response_revision:743; }","duration":"145.289485ms","start":"2025-12-19T03:36:08.051406Z","end":"2025-12-19T03:36:08.196695Z","steps":["trace[1905526564] 'agreement among raft nodes before linearized reading'  (duration: 145.16394ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.198111Z","caller":"traceutil/trace.go:172","msg":"trace[420394175] transaction","detail":"{read_only:false; response_revision:740; number_of_response:1; }","duration":"224.163943ms","start":"2025-12-19T03:36:07.973933Z","end":"2025-12-19T03:36:08.198097Z","steps":["trace[420394175] 'process raft request'  (duration: 220.284119ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.198540Z","caller":"traceutil/trace.go:172","msg":"trace[127230267] transaction","detail":"{read_only:false; response_revision:739; number_of_response:1; }","duration":"224.649898ms","start":"2025-12-19T03:36:07.973879Z","end":"2025-12-19T03:36:08.198529Z","steps":["trace[127230267] 'process raft request'  (duration: 220.283799ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.201283Z","caller":"traceutil/trace.go:172","msg":"trace[259166362] transaction","detail":"{read_only:false; response_revision:741; number_of_response:1; }","duration":"220.872935ms","start":"2025-12-19T03:36:07.980329Z","end":"2025-12-19T03:36:08.201202Z","steps":["trace[259166362] 'process raft request'  (duration: 213.916481ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.203565Z","caller":"traceutil/trace.go:172","msg":"trace[937592532] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"221.329918ms","start":"2025-12-19T03:36:07.982223Z","end":"2025-12-19T03:36:08.203553Z","steps":["trace[937592532] 'process raft request'  (duration: 212.060307ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.206645Z","caller":"traceutil/trace.go:172","msg":"trace[1774479602] transaction","detail":"{read_only:false; response_revision:743; number_of_response:1; }","duration":"211.801088ms","start":"2025-12-19T03:36:07.994832Z","end":"2025-12-19T03:36:08.206633Z","steps":["trace[1774479602] 'process raft request'  (duration: 199.478067ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.207607Z","caller":"traceutil/trace.go:172","msg":"trace[290694141] transaction","detail":"{read_only:false; response_revision:745; number_of_response:1; }","duration":"193.621683ms","start":"2025-12-19T03:36:08.013978Z","end":"2025-12-19T03:36:08.207599Z","steps":["trace[290694141] 'process raft request'  (duration: 192.833777ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.208012Z","caller":"traceutil/trace.go:172","msg":"trace[1293584312] transaction","detail":"{read_only:false; response_revision:744; number_of_response:1; }","duration":"205.579052ms","start":"2025-12-19T03:36:08.002423Z","end":"2025-12-19T03:36:08.208002Z","steps":["trace[1293584312] 'process raft request'  (duration: 204.327672ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.208245Z","caller":"traceutil/trace.go:172","msg":"trace[646696037] transaction","detail":"{read_only:false; response_revision:746; number_of_response:1; }","duration":"193.584631ms","start":"2025-12-19T03:36:08.014650Z","end":"2025-12-19T03:36:08.208234Z","steps":["trace[646696037] 'process raft request'  (duration: 192.191164ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.208271Z","caller":"traceutil/trace.go:172","msg":"trace[1269680450] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"192.881447ms","start":"2025-12-19T03:36:08.015385Z","end":"2025-12-19T03:36:08.208266Z","steps":["trace[1269680450] 'process raft request'  (duration: 191.528853ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.208357Z","caller":"traceutil/trace.go:172","msg":"trace[1596576251] transaction","detail":"{read_only:false; response_revision:748; number_of_response:1; }","duration":"192.815707ms","start":"2025-12-19T03:36:08.015535Z","end":"2025-12-19T03:36:08.208350Z","steps":["trace[1596576251] 'process raft request'  (duration: 191.740222ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:13.865837Z","caller":"traceutil/trace.go:172","msg":"trace[746629949] transaction","detail":"{read_only:false; response_revision:783; number_of_response:1; }","duration":"157.535196ms","start":"2025-12-19T03:36:13.708284Z","end":"2025-12-19T03:36:13.865819Z","steps":["trace[746629949] 'process raft request'  (duration: 157.431943ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:36.482358Z","caller":"traceutil/trace.go:172","msg":"trace[1390622980] linearizableReadLoop","detail":"{readStateIndex:875; appliedIndex:875; }","duration":"106.368191ms","start":"2025-12-19T03:36:36.375949Z","end":"2025-12-19T03:36:36.482317Z","steps":["trace[1390622980] 'read index received'  (duration: 106.360299ms)","trace[1390622980] 'applied index is now lower than readState.Index'  (duration: 7.214µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:36:36.502196Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.177833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:36:36.502321Z","caller":"traceutil/trace.go:172","msg":"trace[820395789] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:822; }","duration":"126.374311ms","start":"2025-12-19T03:36:36.375929Z","end":"2025-12-19T03:36:36.502304Z","steps":["trace[820395789] 'agreement among raft nodes before linearized reading'  (duration: 106.606564ms)","trace[820395789] 'range keys from in-memory index tree'  (duration: 19.464267ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:37:25.399551Z","caller":"traceutil/trace.go:172","msg":"trace[366595385] transaction","detail":"{read_only:false; response_revision:886; number_of_response:1; }","duration":"117.892047ms","start":"2025-12-19T03:37:25.281633Z","end":"2025-12-19T03:37:25.399526Z","steps":["trace[366595385] 'process raft request'  (duration: 117.669312ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:37:27.573956Z","caller":"traceutil/trace.go:172","msg":"trace[600816068] transaction","detail":"{read_only:false; response_revision:887; number_of_response:1; }","duration":"160.16188ms","start":"2025-12-19T03:37:27.413773Z","end":"2025-12-19T03:37:27.573935Z","steps":["trace[600816068] 'process raft request'  (duration: 159.958935ms)"],"step_count":1}
	
	
	==> etcd [d76db54c93b485c4b649a151f191b4a25903144828d128d58e6bc856e7adc487] <==
	{"level":"info","ts":"2025-12-19T03:33:52.844068Z","caller":"traceutil/trace.go:172","msg":"trace[172029000] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:439; }","duration":"181.60869ms","start":"2025-12-19T03:33:52.662450Z","end":"2025-12-19T03:33:52.844059Z","steps":["trace[172029000] 'range keys from in-memory index tree'  (duration: 181.2323ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:53.303795Z","caller":"traceutil/trace.go:172","msg":"trace[184832759] linearizableReadLoop","detail":"{readStateIndex:454; appliedIndex:454; }","duration":"139.889112ms","start":"2025-12-19T03:33:53.163890Z","end":"2025-12-19T03:33:53.303779Z","steps":["trace[184832759] 'read index received'  (duration: 139.881161ms)","trace[184832759] 'applied index is now lower than readState.Index'  (duration: 7.202µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:53.303971Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.045735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-x9688\" limit:1 ","response":"range_response_count:1 size:5663"}
	{"level":"info","ts":"2025-12-19T03:33:53.304011Z","caller":"traceutil/trace.go:172","msg":"trace[497233565] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:439; }","duration":"140.119834ms","start":"2025-12-19T03:33:53.163884Z","end":"2025-12-19T03:33:53.304004Z","steps":["trace[497233565] 'agreement among raft nodes before linearized reading'  (duration: 139.967883ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:53.548755Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"244.319989ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5026750784611582435 > lease_revoke:<id:45c29b34ab30815c>","response":"size:29"}
	{"level":"info","ts":"2025-12-19T03:33:53.548865Z","caller":"traceutil/trace.go:172","msg":"trace[2032160460] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:454; }","duration":"240.821592ms","start":"2025-12-19T03:33:53.308031Z","end":"2025-12-19T03:33:53.548853Z","steps":["trace[2032160460] 'read index received'  (duration: 55.056µs)","trace[2032160460] 'applied index is now lower than readState.Index'  (duration: 240.765601ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:53.548987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.976509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-728806\" limit:1 ","response":"range_response_count:1 size:3443"}
	{"level":"info","ts":"2025-12-19T03:33:53.549008Z","caller":"traceutil/trace.go:172","msg":"trace[1531602865] range","detail":"{range_begin:/registry/minions/no-preload-728806; range_end:; response_count:1; response_revision:439; }","duration":"241.003574ms","start":"2025-12-19T03:33:53.307997Z","end":"2025-12-19T03:33:53.549001Z","steps":["trace[1531602865] 'agreement among raft nodes before linearized reading'  (duration: 240.890835ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:57.789058Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.388109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-x9688\" limit:1 ","response":"range_response_count:1 size:5663"}
	{"level":"info","ts":"2025-12-19T03:33:57.789165Z","caller":"traceutil/trace.go:172","msg":"trace[1207472380] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:443; }","duration":"150.508073ms","start":"2025-12-19T03:33:57.638641Z","end":"2025-12-19T03:33:57.789149Z","steps":["trace[1207472380] 'range keys from in-memory index tree'  (duration: 150.192687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:57.789538Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.226884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-x9688\" limit:1 ","response":"range_response_count:1 size:5663"}
	{"level":"info","ts":"2025-12-19T03:33:57.789611Z","caller":"traceutil/trace.go:172","msg":"trace[722102572] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:443; }","duration":"125.27731ms","start":"2025-12-19T03:33:57.664285Z","end":"2025-12-19T03:33:57.789562Z","steps":["trace[722102572] 'range keys from in-memory index tree'  (duration: 125.115573ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:57.957635Z","caller":"traceutil/trace.go:172","msg":"trace[1444336190] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"145.813738ms","start":"2025-12-19T03:33:57.811801Z","end":"2025-12-19T03:33:57.957614Z","steps":["trace[1444336190] 'process raft request'  (duration: 145.592192ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:58.102313Z","caller":"traceutil/trace.go:172","msg":"trace[560799855] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"135.664689ms","start":"2025-12-19T03:33:57.966630Z","end":"2025-12-19T03:33:58.102295Z","steps":["trace[560799855] 'process raft request'  (duration: 71.392388ms)","trace[560799855] 'compare'  (duration: 64.082773ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:58.400706Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.886648ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-x9688\" limit:1 ","response":"range_response_count:1 size:5485"}
	{"level":"info","ts":"2025-12-19T03:33:58.400772Z","caller":"traceutil/trace.go:172","msg":"trace[1007598697] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:445; }","duration":"236.959596ms","start":"2025-12-19T03:33:58.163798Z","end":"2025-12-19T03:33:58.400757Z","steps":["trace[1007598697] 'agreement among raft nodes before linearized reading'  (duration: 42.53694ms)","trace[1007598697] 'range keys from in-memory index tree'  (duration: 194.268766ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:58.400858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.30338ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5026750784611582482 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" mod_revision:413 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" value_size:4103 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-19T03:33:58.401001Z","caller":"traceutil/trace.go:172","msg":"trace[1203841075] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"431.044616ms","start":"2025-12-19T03:33:57.969947Z","end":"2025-12-19T03:33:58.400992Z","steps":["trace[1203841075] 'process raft request'  (duration: 430.973541ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:58.401132Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:33:57.969935Z","time spent":"431.123085ms","remote":"127.0.0.1:42686","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1258,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-xwqww\" mod_revision:412 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-xwqww\" value_size:1199 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-xwqww\" > >"}
	{"level":"info","ts":"2025-12-19T03:33:58.401330Z","caller":"traceutil/trace.go:172","msg":"trace[1342167826] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"431.477899ms","start":"2025-12-19T03:33:57.969844Z","end":"2025-12-19T03:33:58.401322Z","steps":["trace[1342167826] 'process raft request'  (duration: 236.530374ms)","trace[1342167826] 'compare'  (duration: 194.196062ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:58.401372Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:33:57.969825Z","time spent":"431.523224ms","remote":"127.0.0.1:43152","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4163,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" mod_revision:413 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" value_size:4103 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" > >"}
	{"level":"info","ts":"2025-12-19T03:33:58.557545Z","caller":"traceutil/trace.go:172","msg":"trace[1424643893] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:464; }","duration":"143.2294ms","start":"2025-12-19T03:33:58.414293Z","end":"2025-12-19T03:33:58.557523Z","steps":["trace[1424643893] 'read index received'  (duration: 143.221933ms)","trace[1424643893] 'applied index is now lower than readState.Index'  (duration: 6.576µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:58.561732Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.446706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:33:58.561774Z","caller":"traceutil/trace.go:172","msg":"trace[25830164] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:447; }","duration":"147.500036ms","start":"2025-12-19T03:33:58.414266Z","end":"2025-12-19T03:33:58.561766Z","steps":["trace[25830164] 'agreement among raft nodes before linearized reading'  (duration: 143.371499ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:58.561889Z","caller":"traceutil/trace.go:172","msg":"trace[1197332737] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"148.544071ms","start":"2025-12-19T03:33:58.413331Z","end":"2025-12-19T03:33:58.561875Z","steps":["trace[1197332737] 'process raft request'  (duration: 144.355051ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:45:26 up 9 min,  0 users,  load average: 0.79, 0.32, 0.20
	Linux no-preload-728806 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [78fce5e539795bf0e161cfaab83885b769767cbc45af0a575c3e2e1d5d2ce929] <==
	I1219 03:33:18.590545       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:33:18.642677       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:33:23.132808       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:33:23.143632       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:33:23.176279       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:33:23.541725       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:34:12.954989       1 conn.go:339] Error on socket receive: read tcp 192.168.50.172:8443->192.168.50.1:41766: use of closed network connection
	I1219 03:34:13.676429       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:34:13.690090       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:13.690663       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:34:13.691030       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:34:13.860607       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.108.94.62"}
	W1219 03:34:13.873335       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:13.873739       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:34:13.883054       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:13.883109       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [8e9475c78e5b931b2d174b98c7f3095a1ee185519d00942f21d0ebf9092be1a0] <==
	I1219 03:40:58.167813       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:40:58.168075       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:40:58.168206       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:40:58.169524       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:41:58.168349       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:41:58.168540       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:41:58.168573       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:41:58.170585       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:41:58.170696       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:41:58.170723       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:43:58.169530       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:43:58.169648       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:43:58.169672       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:43:58.170880       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:43:58.170988       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:43:58.170997       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9e837e7d646d3029c97a82f448b8aa058a25d25934e9bc90a5d77e5e64e6b38d] <==
	I1219 03:33:22.405176       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.409261       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.410057       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.410865       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.405825       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.406331       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408392       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408300       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.396387       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.397085       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408403       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408524       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408411       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408418       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.397100       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408534       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408723       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408739       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408746       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.447688       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:33:22.469765       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-728806" podCIDRs=["10.244.0.0/24"]
	I1219 03:33:22.548100       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.594503       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.594536       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:33:22.594543       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [d4b2c0b372751a1bc59d857101659a4a9bfe03f44e49264fc35f2b4bf8d1e6c2] <==
	I1219 03:39:04.129028       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:39:34.008121       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:39:34.139771       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:40:04.016476       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:40:04.151638       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:40:34.025387       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:40:34.161692       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:41:04.031947       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:41:04.170369       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:41:34.038987       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:41:34.182388       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:42:04.046358       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:42:04.194784       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:42:34.054259       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:42:34.206777       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:43:04.061392       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:43:04.217560       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:43:34.068090       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:43:34.227637       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:44:04.074534       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:44:04.238760       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:44:34.080853       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:44:34.253192       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:45:04.087294       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:45:04.266729       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [18e278297341f2ef6879a8f2a0674670948d8c3f6abb39a02ef05bd379a52a3e] <==
	I1219 03:35:59.338718       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:35:59.439239       1 shared_informer.go:377] "Caches are synced"
	I1219 03:35:59.439294       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.172"]
	E1219 03:35:59.439917       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:35:59.492895       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:35:59.492993       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:35:59.493095       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:35:59.503378       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:35:59.504604       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:35:59.504640       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:35:59.510407       1 config.go:200] "Starting service config controller"
	I1219 03:35:59.510793       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:35:59.511053       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:35:59.511124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:35:59.511310       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:35:59.511373       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:35:59.512229       1 config.go:309] "Starting node config controller"
	I1219 03:35:59.512355       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:35:59.512373       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:35:59.611525       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:35:59.611555       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:35:59.611592       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ba269e021c7b586768ea42947bca487c6a450c93b996c1fef9978ea650ccfa4f] <==
	I1219 03:33:25.374424       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:33:25.475572       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:25.475639       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.172"]
	E1219 03:33:25.475810       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:33:25.574978       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:33:25.575374       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:33:25.575497       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:33:25.589304       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:33:25.590158       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:33:25.590614       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:33:25.601510       1 config.go:200] "Starting service config controller"
	I1219 03:33:25.601721       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:33:25.601933       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:33:25.602061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:33:25.602134       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:33:25.602299       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:33:25.604477       1 config.go:309] "Starting node config controller"
	I1219 03:33:25.604492       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:33:25.702099       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:33:25.702282       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:33:25.702561       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:33:25.705300       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [31fe13610d626eb1358e924d8da122c03230ae2e208af3021661987752f9fb4a] <==
	I1219 03:35:55.174512       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:35:56.981884       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:35:56.981984       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:35:56.982135       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:35:56.982144       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:35:57.075904       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:35:57.079777       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:35:57.087998       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:35:57.088254       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:35:57.092862       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:35:57.088331       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 03:35:57.161946       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1219 03:35:58.693823       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [e95ce55118a31daf218148578c1b544b8ed677b36adb51f5f11f5c4b4fe7c908] <==
	E1219 03:33:15.625993       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 03:33:15.626737       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 03:33:15.626778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:33:15.626812       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 03:33:15.628170       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:33:15.628509       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:33:15.628789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:33:15.628987       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 03:33:15.629157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 03:33:16.433594       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 03:33:16.469822       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 03:33:16.481395       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:33:16.491967       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1219 03:33:16.585593       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 03:33:16.593438       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 03:33:16.595573       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:33:16.601532       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 03:33:16.660912       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 03:33:16.698560       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:33:16.722638       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 03:33:16.855068       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 03:33:16.875884       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 03:33:16.887586       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:33:16.943310       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	I1219 03:33:19.205442       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:43:50 no-preload-728806 kubelet[1076]: E1219 03:43:50.405821    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-728806" containerName="kube-controller-manager"
	Dec 19 03:43:52 no-preload-728806 kubelet[1076]: E1219 03:43:52.404642    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:43:52 no-preload-728806 kubelet[1076]: E1219 03:43:52.406591    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:43:53 no-preload-728806 kubelet[1076]: E1219 03:43:53.405853    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x9688" containerName="coredns"
	Dec 19 03:44:06 no-preload-728806 kubelet[1076]: E1219 03:44:06.405672    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:44:06 no-preload-728806 kubelet[1076]: E1219 03:44:06.407169    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:44:17 no-preload-728806 kubelet[1076]: E1219 03:44:17.405651    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:44:17 no-preload-728806 kubelet[1076]: E1219 03:44:17.407651    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:44:23 no-preload-728806 kubelet[1076]: E1219 03:44:23.408811    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-728806" containerName="kube-apiserver"
	Dec 19 03:44:29 no-preload-728806 kubelet[1076]: E1219 03:44:29.405542    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-728806" containerName="etcd"
	Dec 19 03:44:32 no-preload-728806 kubelet[1076]: E1219 03:44:32.404553    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:44:32 no-preload-728806 kubelet[1076]: E1219 03:44:32.406278    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:44:46 no-preload-728806 kubelet[1076]: E1219 03:44:46.405515    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-k5gpr" containerName="proxy"
	Dec 19 03:44:46 no-preload-728806 kubelet[1076]: E1219 03:44:46.406548    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:44:46 no-preload-728806 kubelet[1076]: E1219 03:44:46.408587    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:44:50 no-preload-728806 kubelet[1076]: E1219 03:44:50.404579    1076 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-m7bxg" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:45:00 no-preload-728806 kubelet[1076]: E1219 03:45:00.405733    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-728806" containerName="kube-scheduler"
	Dec 19 03:45:00 no-preload-728806 kubelet[1076]: E1219 03:45:00.406027    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:45:00 no-preload-728806 kubelet[1076]: E1219 03:45:00.407581    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:45:08 no-preload-728806 kubelet[1076]: E1219 03:45:08.404873    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-728806" containerName="kube-controller-manager"
	Dec 19 03:45:14 no-preload-728806 kubelet[1076]: E1219 03:45:14.405284    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:45:14 no-preload-728806 kubelet[1076]: E1219 03:45:14.406716    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:45:20 no-preload-728806 kubelet[1076]: E1219 03:45:20.405030    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x9688" containerName="coredns"
	Dec 19 03:45:26 no-preload-728806 kubelet[1076]: E1219 03:45:26.405670    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:45:26 no-preload-728806 kubelet[1076]: E1219 03:45:26.408200    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	
	
	==> kubernetes-dashboard [47bce22eb1c3d05d42b992a32a07de7295a66c8221d28877512dd638f7211103] <==
	E1219 03:43:16.196028       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:44:16.195641       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:45:16.195343       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:42:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:42:56 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:42:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:43:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:43:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:43:26 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:43:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:43:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:43:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:43:56 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:43:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:44:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:44:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:44:26 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:44:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:44:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:44:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:44:56 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:44:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:45:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:45:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:45:26 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	
	
	==> kubernetes-dashboard [4ab0badeeeac9066abd92e7322c2ebdf2519d10c318b6dbb9979651de9f47051] <==
	I1219 03:36:12.895678       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:36:12.895771       1 init.go:49] Using in-cluster config
	I1219 03:36:12.895970       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [7b0a529acf54eff15dbdb1f47e2ad6584d347a66e1e80bcb4d9664c4444610fd] <==
	I1219 03:36:26.094064       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:36:26.094175       1 init.go:49] Using in-cluster config
	I1219 03:36:26.094617       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:36:26.094634       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:36:26.094642       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:36:26.094648       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:36:26.101505       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 03:36:26.101675       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:36:26.122225       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 03:36:26.127891       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [f43ad99d90fdcbc40aff01641cbe5c36c47cb8ac50d0891cd9bb37a3ba555617] <==
	I1219 03:36:22.512883       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:36:22.513145       1 init.go:48] Using in-cluster config
	I1219 03:36:22.513774       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [944453a82c5aa9d3c5e09a2a924586b9708bf3f99f7878d28d61b7e7c1fee4c8] <==
	W1219 03:45:02.303685       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:04.308534       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:04.314293       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:06.318669       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:06.327105       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:08.332142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:08.338113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:10.342178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:10.351034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:12.356182       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:12.362997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:14.366935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:14.378690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:16.383121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:16.388306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:18.392115       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:18.398809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:20.403466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:20.409909       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:22.413852       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:22.421060       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:24.425257       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:24.433727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:26.442506       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:26.452724       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [974c9df3fdb6ebf85707dff617f7db917a0a2dec07eec91af2ef490c42f3aeb8] <==
	I1219 03:35:59.143005       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:36:29.154957       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-728806 -n no-preload-728806
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-728806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-9zx57
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-728806 describe pod metrics-server-5d785b57d4-9zx57
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-728806 describe pod metrics-server-5d785b57d4-9zx57: exit status 1 (68.215789ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-9zx57" not found

** /stderr **
helpers_test.go:288: kubectl --context no-preload-728806 describe pod metrics-server-5d785b57d4-9zx57: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (542.62s)

x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.71s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:36:58.466980    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:00.584782    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:00.590104    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:00.600466    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:00.620844    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:00.661835    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:00.742225    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:00.903192    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:01.223817    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:01.864053    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:03.144907    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:05.705415    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:08.865960    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:08.871291    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:08.881626    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:08.902023    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:08.942374    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:09.022809    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:09.183344    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:09.504408    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:10.144857    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:10.826089    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:11.425486    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:13.986292    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:18.947511    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:19.106929    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:21.066951    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:29.347803    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:272: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832734 -n embed-certs-832734
start_stop_delete_test.go:272: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:45:55.879059536 +0000 UTC m=+4842.693688361
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
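(For reference, a minimal client-go sketch of the readiness poll this assertion is waiting on: list pods matching k8s-app=kubernetes-dashboard and poll until one reports Ready or the 9m deadline expires. The kubeconfig path and poll interval below are illustrative assumptions, not values taken from the test harness.)

```go
// dashboard_wait_sketch.go — a minimal sketch, assuming a reachable cluster via
// the default kubeconfig; namespace and label selector mirror the ones logged above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed path; the CI job points KUBECONFIG at its own profile directory.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()

	// Poll until at least one matching pod reports Ready, or the deadline hits —
	// the same "context deadline exceeded" failure mode seen in this log.
	err = wait.PollUntilContextCancel(ctx, 2*time.Second, true, func(ctx context.Context) (bool, error) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			return false, nil // treat list errors as transient and keep polling
		}
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return true, nil
				}
			}
		}
		return false, nil
	})
	fmt.Println("wait result:", err)
}
```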
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832734 -n embed-certs-832734
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-832734 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-832734 logs -n 25: (1.750182636s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                       ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-694633 sudo cat /etc/containerd/config.toml                                                                                                                                                                                             │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo containerd config dump                                                                                                                                                                                                      │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │                     │
	│ ssh     │ -p bridge-694633 sudo systemctl cat crio --no-pager                                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                     │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo crio config                                                                                                                                                                                                                 │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p bridge-694633                                                                                                                                                                                                                                  │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p disable-driver-mounts-477416                                                                                                                                                                                                                   │ disable-driver-mounts-477416 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-638861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ stop    │ -p old-k8s-version-638861 --alsologtostderr -v=3                                                                                                                                                                                                  │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-728806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p no-preload-728806 --alsologtostderr -v=3                                                                                                                                                                                                       │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-832734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p embed-certs-832734 --alsologtostderr -v=3                                                                                                                                                                                                      │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ stop    │ -p default-k8s-diff-port-382606 --alsologtostderr -v=3                                                                                                                                                                                            │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-638861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:36:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:36:29.621083   51711 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:36:29.621200   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621205   51711 out.go:374] Setting ErrFile to fd 2...
	I1219 03:36:29.621212   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621491   51711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:36:29.622131   51711 out.go:368] Setting JSON to false
	I1219 03:36:29.623408   51711 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4729,"bootTime":1766110661,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:36:29.623486   51711 start.go:143] virtualization: kvm guest
	I1219 03:36:29.625670   51711 out.go:179] * [default-k8s-diff-port-382606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:36:29.633365   51711 notify.go:221] Checking for updates...
	I1219 03:36:29.633417   51711 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:36:29.635075   51711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:36:29.636942   51711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:29.638374   51711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:36:29.639842   51711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:36:29.641026   51711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:36:29.642747   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:29.643478   51711 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:36:29.700163   51711 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:36:29.701162   51711 start.go:309] selected driver: kvm2
	I1219 03:36:29.701180   51711 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.701323   51711 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:36:29.702837   51711 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:29.702885   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:29.702957   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:29.703020   51711 start.go:353] cluster config:
	{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.703150   51711 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:36:29.704494   51711 out.go:179] * Starting "default-k8s-diff-port-382606" primary control-plane node in "default-k8s-diff-port-382606" cluster
	I1219 03:36:29.705691   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:29.705751   51711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:36:29.705771   51711 cache.go:65] Caching tarball of preloaded images
	I1219 03:36:29.705892   51711 preload.go:238] Found /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:36:29.705927   51711 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:36:29.706078   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:29.706318   51711 start.go:360] acquireMachinesLock for default-k8s-diff-port-382606: {Name:mkbf0ff4f4743f75373609a52c13bcf346114394 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:36:29.706374   51711 start.go:364] duration metric: took 32.309µs to acquireMachinesLock for "default-k8s-diff-port-382606"
	I1219 03:36:29.706388   51711 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:36:29.706395   51711 fix.go:54] fixHost starting: 
	I1219 03:36:29.708913   51711 fix.go:112] recreateIfNeeded on default-k8s-diff-port-382606: state=Stopped err=<nil>
	W1219 03:36:29.708943   51711 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:36:27.974088   51386 addons.go:239] Setting addon default-storageclass=true in "embed-certs-832734"
	W1219 03:36:27.974109   51386 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:36:27.974136   51386 host.go:66] Checking if "embed-certs-832734" exists ...
	I1219 03:36:27.974565   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:36:27.974582   51386 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:36:27.974599   51386 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:27.974608   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:36:27.976663   51386 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:27.976691   51386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:36:27.976771   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.977846   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.977880   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.978136   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.979376   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979747   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979820   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.979860   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980122   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.980448   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.980482   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980686   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.981056   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981521   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.981545   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981792   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:28.331935   51386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:28.393904   51386 node_ready.go:35] waiting up to 6m0s for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398272   51386 node_ready.go:49] node "embed-certs-832734" is "Ready"
	I1219 03:36:28.398297   51386 node_ready.go:38] duration metric: took 4.336343ms for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398310   51386 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:28.398457   51386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:28.475709   51386 api_server.go:72] duration metric: took 507.310055ms to wait for apiserver process to appear ...
	I1219 03:36:28.475751   51386 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:28.475776   51386 api_server.go:253] Checking apiserver healthz at https://192.168.83.196:8443/healthz ...
	I1219 03:36:28.483874   51386 api_server.go:279] https://192.168.83.196:8443/healthz returned 200:
	ok
	I1219 03:36:28.485710   51386 api_server.go:141] control plane version: v1.34.3
	I1219 03:36:28.485738   51386 api_server.go:131] duration metric: took 9.978141ms to wait for apiserver health ...
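	(Aside: the healthz wait logged above is a plain HTTPS GET against the apiserver endpoint. A minimal sketch of that probe follows; skipping TLS verification is an illustration-only assumption — minikube itself validates against the profile's cluster CA.)

```go
// healthz_probe_sketch.go — minimal sketch of an apiserver /healthz probe.
// InsecureSkipVerify is an illustration-only shortcut, not what minikube does.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.83.196:8443/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// A healthy apiserver answers 200 with the body "ok", as seen in the log above.
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
```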
	I1219 03:36:28.485751   51386 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:36:28.493956   51386 system_pods.go:59] 8 kube-system pods found
	I1219 03:36:28.493996   51386 system_pods.go:61] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.494024   51386 system_pods.go:61] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.494037   51386 system_pods.go:61] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.494044   51386 system_pods.go:61] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.494052   51386 system_pods.go:61] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.494058   51386 system_pods.go:61] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.494064   51386 system_pods.go:61] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.494074   51386 system_pods.go:61] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.494080   51386 system_pods.go:74] duration metric: took 8.32329ms to wait for pod list to return data ...
	I1219 03:36:28.494090   51386 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:36:28.500269   51386 default_sa.go:45] found service account: "default"
	I1219 03:36:28.500298   51386 default_sa.go:55] duration metric: took 6.200379ms for default service account to be created ...
	I1219 03:36:28.500309   51386 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:36:28.601843   51386 system_pods.go:86] 8 kube-system pods found
	I1219 03:36:28.601871   51386 system_pods.go:89] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.601880   51386 system_pods.go:89] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.601887   51386 system_pods.go:89] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.601892   51386 system_pods.go:89] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.601896   51386 system_pods.go:89] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.601902   51386 system_pods.go:89] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.601921   51386 system_pods.go:89] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.601930   51386 system_pods.go:89] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.601938   51386 system_pods.go:126] duration metric: took 101.621956ms to wait for k8s-apps to be running ...
	I1219 03:36:28.601947   51386 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:36:28.602031   51386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:36:28.618616   51386 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:36:28.685146   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:36:28.685175   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:36:28.694410   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:28.696954   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:28.726390   51386 system_svc.go:56] duration metric: took 124.434217ms WaitForService to wait for kubelet
	I1219 03:36:28.726426   51386 kubeadm.go:587] duration metric: took 758.032732ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:28.726450   51386 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:36:28.726520   51386 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:36:28.739364   51386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:36:28.739393   51386 node_conditions.go:123] node cpu capacity is 2
	I1219 03:36:28.739407   51386 node_conditions.go:105] duration metric: took 12.951551ms to run NodePressure ...
	I1219 03:36:28.739421   51386 start.go:242] waiting for startup goroutines ...
	I1219 03:36:28.774949   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:36:28.774981   51386 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:36:28.896758   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:28.896785   51386 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:36:29.110522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:31.016418   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.319423876s)
	I1219 03:36:31.016497   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.322025841s)
	I1219 03:36:31.016534   51386 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.28998192s)
	I1219 03:36:31.016597   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.906047637s)
	I1219 03:36:31.016610   51386 addons.go:500] Verifying addon metrics-server=true in "embed-certs-832734"
	I1219 03:36:31.016613   51386 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:36:29.711054   51711 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-382606" ...
	I1219 03:36:29.711101   51711 main.go:144] libmachine: starting domain...
	I1219 03:36:29.711116   51711 main.go:144] libmachine: ensuring networks are active...
	I1219 03:36:29.712088   51711 main.go:144] libmachine: Ensuring network default is active
	I1219 03:36:29.712549   51711 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-382606 is active
	I1219 03:36:29.713312   51711 main.go:144] libmachine: getting domain XML...
	I1219 03:36:29.714943   51711 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-382606</name>
	  <uuid>342506c1-9e12-4922-9438-23d9d57eea28</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/default-k8s-diff-port-382606.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:fb:a4:4e'/>
	      <source network='mk-default-k8s-diff-port-382606'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:57:4f:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
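	(Aside: the XML above is the persistent domain definition libmachine hands to libvirt; the "restart" is simply starting that already-defined domain again. A rough sketch with the libvirt Go bindings follows — the libvirt.org/go/libvirt package path and the qemu:///system URI are assumptions for illustration, and this is not the kvm2 driver's actual code path.)

```go
// domain_restart_sketch.go — a rough sketch of restarting a defined libvirt
// domain, assuming the libvirt Go bindings and a local qemu:///system socket.
package main

import (
	"fmt"

	"libvirt.org/go/libvirt"
)

func main() {
	conn, err := libvirt.NewConnect("qemu:///system")
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	// The domain is already defined (its XML is shown above); restarting a
	// stopped VM amounts to starting that persistent definition again.
	dom, err := conn.LookupDomainByName("default-k8s-diff-port-382606")
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if active, _ := dom.IsActive(); !active {
		if err := dom.Create(); err != nil { // equivalent of `virsh start <name>`
			panic(err)
		}
	}
	fmt.Println("domain is running")
}
```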
	
	I1219 03:36:31.342655   51711 main.go:144] libmachine: waiting for domain to start...
	I1219 03:36:31.345734   51711 main.go:144] libmachine: domain is now running
	I1219 03:36:31.345778   51711 main.go:144] libmachine: waiting for IP...
	I1219 03:36:31.347227   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348141   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has current primary IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348163   51711 main.go:144] libmachine: found domain IP: 192.168.72.129
	I1219 03:36:31.348170   51711 main.go:144] libmachine: reserving static IP address...
	I1219 03:36:31.348677   51711 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.348704   51711 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-382606 - found existing host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"}
	I1219 03:36:31.348713   51711 main.go:144] libmachine: reserved static IP address 192.168.72.129 for domain default-k8s-diff-port-382606
	I1219 03:36:31.348731   51711 main.go:144] libmachine: waiting for SSH...
	I1219 03:36:31.348741   51711 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:36:31.351582   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352122   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.352155   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352422   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:31.352772   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:31.352782   51711 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:36:34.417281   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
	I1219 03:36:31.980522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:36:35.707529   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.726958549s)
	I1219 03:36:35.707614   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:36:36.641432   51386 addons.go:500] Verifying addon dashboard=true in "embed-certs-832734"
	I1219 03:36:36.645285   51386 out.go:179] * Verifying dashboard addon...
	I1219 03:36:36.647847   51386 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:36:36.659465   51386 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:36:36.659491   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.154819   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.652042   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.152461   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.651730   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.152475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.652155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.153311   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.652427   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:41.151837   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.497282   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
	I1219 03:36:43.498703   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: connection refused
	I1219 03:36:41.654155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.154727   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.653186   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.152647   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.651177   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.154241   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.651752   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:45.152244   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.124796   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.151832   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.628602   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:36:46.632304   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.632730   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.632753   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.633056   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:46.633240   51711 machine.go:94] provisionDockerMachine start ...
	I1219 03:36:46.635441   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.635889   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.635934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.636109   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.636298   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.636308   51711 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:36:46.752911   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:36:46.752937   51711 buildroot.go:166] provisioning hostname "default-k8s-diff-port-382606"
	I1219 03:36:46.756912   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757425   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.757463   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757703   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.757935   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.757955   51711 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382606 && echo "default-k8s-diff-port-382606" | sudo tee /etc/hostname
	I1219 03:36:46.902266   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382606
	
	I1219 03:36:46.905791   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906293   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.906323   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906555   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.906758   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.906774   51711 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382606/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:36:47.045442   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:36:47.045472   51711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5003/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5003/.minikube}
	I1219 03:36:47.045496   51711 buildroot.go:174] setting up certificates
	I1219 03:36:47.045505   51711 provision.go:84] configureAuth start
	I1219 03:36:47.049643   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.050087   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.050115   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.052980   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053377   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.053417   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053596   51711 provision.go:143] copyHostCerts
	I1219 03:36:47.053653   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem, removing ...
	I1219 03:36:47.053678   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem
	I1219 03:36:47.053772   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem (1082 bytes)
	I1219 03:36:47.053902   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem, removing ...
	I1219 03:36:47.053919   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem
	I1219 03:36:47.053949   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem (1123 bytes)
	I1219 03:36:47.054027   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem, removing ...
	I1219 03:36:47.054036   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem
	I1219 03:36:47.054059   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem (1675 bytes)
	I1219 03:36:47.054113   51711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382606 san=[127.0.0.1 192.168.72.129 default-k8s-diff-port-382606 localhost minikube]
	I1219 03:36:47.093786   51711 provision.go:177] copyRemoteCerts
	I1219 03:36:47.093848   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:36:47.096938   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097402   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.097443   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097608   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.187589   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:36:47.229519   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:36:47.264503   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:36:47.294746   51711 provision.go:87] duration metric: took 249.22829ms to configureAuth
	I1219 03:36:47.294772   51711 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:36:47.294974   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:47.294990   51711 machine.go:97] duration metric: took 661.738495ms to provisionDockerMachine
	I1219 03:36:47.295000   51711 start.go:293] postStartSetup for "default-k8s-diff-port-382606" (driver="kvm2")
	I1219 03:36:47.295020   51711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:36:47.295079   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:36:47.297915   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298388   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.298414   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298592   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.391351   51711 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:36:47.396636   51711 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:36:47.396664   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/addons for local assets ...
	I1219 03:36:47.396734   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/files for local assets ...
	I1219 03:36:47.396833   51711 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem -> 89782.pem in /etc/ssl/certs
	I1219 03:36:47.396981   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:36:47.414891   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:47.450785   51711 start.go:296] duration metric: took 155.770681ms for postStartSetup
	I1219 03:36:47.450829   51711 fix.go:56] duration metric: took 17.744433576s for fixHost
	I1219 03:36:47.453927   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454408   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.454438   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454581   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:47.454774   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:47.454784   51711 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:36:47.578960   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766115407.541226750
	
	I1219 03:36:47.578984   51711 fix.go:216] guest clock: 1766115407.541226750
	I1219 03:36:47.578993   51711 fix.go:229] Guest: 2025-12-19 03:36:47.54122675 +0000 UTC Remote: 2025-12-19 03:36:47.450834556 +0000 UTC m=+17.907032910 (delta=90.392194ms)
	I1219 03:36:47.579033   51711 fix.go:200] guest clock delta is within tolerance: 90.392194ms
	I1219 03:36:47.579039   51711 start.go:83] releasing machines lock for "default-k8s-diff-port-382606", held for 17.872657006s
	I1219 03:36:47.582214   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.582699   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.582737   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.583361   51711 ssh_runner.go:195] Run: cat /version.json
	I1219 03:36:47.583439   51711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:36:47.586735   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.586965   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587209   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587236   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587400   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.587637   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587663   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587852   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.701374   51711 ssh_runner.go:195] Run: systemctl --version
	I1219 03:36:47.707956   51711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:36:47.714921   51711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:36:47.714993   51711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:36:47.736464   51711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:36:47.736487   51711 start.go:496] detecting cgroup driver to use...
	I1219 03:36:47.736550   51711 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:36:47.771913   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:36:47.789225   51711 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:36:47.789292   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:36:47.814503   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:36:47.832961   51711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:36:48.004075   51711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:36:48.227207   51711 docker.go:234] disabling docker service ...
	I1219 03:36:48.227297   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:36:48.245923   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:36:48.261992   51711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:36:48.443743   51711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:36:48.627983   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:36:48.647391   51711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:36:48.673139   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:36:48.690643   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:36:48.703896   51711 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 03:36:48.703949   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 03:36:48.718567   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.732932   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:36:48.749170   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.772676   51711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:36:48.787125   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:36:48.800190   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:36:48.812900   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:36:48.826147   51711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:36:48.841046   51711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:36:48.841107   51711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:36:48.867440   51711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:36:48.879351   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:49.048166   51711 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:36:49.092003   51711 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:36:49.092122   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:36:49.098374   51711 retry.go:31] will retry after 1.402478088s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1219 03:36:50.501086   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:36:50.509026   51711 start.go:564] Will wait 60s for crictl version
	I1219 03:36:50.509089   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:50.514426   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:36:50.554888   51711 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:36:50.554956   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.583326   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.611254   51711 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:36:46.651075   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.206126   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.654221   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.152458   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.651475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.152863   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.655859   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.152073   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.655613   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.153352   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.653895   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.151537   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.653336   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.156131   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.652752   51386 kapi.go:107] duration metric: took 17.00490252s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:36:53.654689   51386 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-832734 addons enable metrics-server
	
	I1219 03:36:53.656077   51386 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
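Note: the kapi polling above waits for a pod labelled app.kubernetes.io/name=kubernetes-dashboard-web to leave Pending before the dashboard command reports readiness. A minimal client-go sketch of the same kind of label-selector poll (the namespace, kubeconfig location, and interval are illustrative assumptions):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selector := "app.kubernetes.io/name=kubernetes-dashboard-web"
	for {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("pod is running:", p.Name)
					return
				}
			}
		}
		fmt.Println("waiting for pod with label", selector)
		time.Sleep(500 * time.Millisecond)
	}
}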
	I1219 03:36:50.615098   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615498   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:50.615532   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615798   51711 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1219 03:36:50.620834   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:36:50.637469   51711 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:36:50.637614   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:50.637684   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.668556   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.668578   51711 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:36:50.668632   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.703466   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.703488   51711 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:36:50.703495   51711 kubeadm.go:935] updating node { 192.168.72.129 8444 v1.34.3 containerd true true} ...
	I1219 03:36:50.703585   51711 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-382606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:36:50.703648   51711 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:36:50.734238   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:50.734260   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:50.734277   51711 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:36:50.734306   51711 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382606 NodeName:default-k8s-diff-port-382606 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:36:50.734471   51711 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-382606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.129"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
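Note: the rendered kubeadm.yaml above bundles several YAML documents in one file (InitConfiguration plus ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A small sketch that splits such a multi-document file and reads one field back, using gopkg.in/yaml.v3 as an assumed dependency; minikube renders this file from templates rather than parsing it, so this is purely illustrative:

package main

import (
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumed dependency for the sketch
)

func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml") // path taken from the log above
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err == io.EOF {
			break
		} else if err != nil {
			panic(err)
		}
		// Report each document's kind, plus the cgroup driver where present.
		fmt.Println("kind:", doc["kind"])
		if cd, ok := doc["cgroupDriver"]; ok {
			fmt.Println("  cgroupDriver:", cd)
		}
	}
}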
	I1219 03:36:50.734558   51711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:36:50.746945   51711 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:36:50.746995   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:36:50.758948   51711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1219 03:36:50.782923   51711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:36:50.807164   51711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
	I1219 03:36:50.829562   51711 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I1219 03:36:50.833888   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:36:50.849703   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:51.014216   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:51.062118   51711 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606 for IP: 192.168.72.129
	I1219 03:36:51.062147   51711 certs.go:195] generating shared ca certs ...
	I1219 03:36:51.062168   51711 certs.go:227] acquiring lock for ca certs: {Name:mk6db7e23547b9013e447eaa0ddba18e05213211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.062409   51711 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key
	I1219 03:36:51.062517   51711 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key
	I1219 03:36:51.062542   51711 certs.go:257] generating profile certs ...
	I1219 03:36:51.062681   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/client.key
	I1219 03:36:51.062791   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key.13c41c2b
	I1219 03:36:51.062855   51711 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key
	I1219 03:36:51.063062   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem (1338 bytes)
	W1219 03:36:51.063113   51711 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978_empty.pem, impossibly tiny 0 bytes
	I1219 03:36:51.063130   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:36:51.063176   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:36:51.063218   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:36:51.063256   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem (1675 bytes)
	I1219 03:36:51.063324   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:51.064049   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:36:51.108621   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:36:51.164027   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:36:51.199337   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:36:51.234216   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:36:51.283158   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:36:51.314148   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:36:51.344498   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:36:51.374002   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:36:51.403858   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem --> /usr/share/ca-certificates/8978.pem (1338 bytes)
	I1219 03:36:51.438346   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /usr/share/ca-certificates/89782.pem (1708 bytes)
	I1219 03:36:51.476174   51711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:36:51.499199   51711 ssh_runner.go:195] Run: openssl version
	I1219 03:36:51.506702   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.518665   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8978.pem /etc/ssl/certs/8978.pem
	I1219 03:36:51.530739   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536107   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:37 /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536167   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.543417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:36:51.554750   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8978.pem /etc/ssl/certs/51391683.0
	I1219 03:36:51.566106   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.577342   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89782.pem /etc/ssl/certs/89782.pem
	I1219 03:36:51.588583   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594342   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:37 /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594386   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.602417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.614493   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89782.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.626108   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.638273   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:36:51.650073   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655546   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655600   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.662728   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:36:51.675457   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:36:51.687999   51711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:36:51.693178   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:36:51.700656   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:36:51.708623   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:36:51.715865   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:36:51.725468   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:36:51.732847   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
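Note: each `openssl x509 -noout ... -checkend 86400` call above asserts that the certificate will still be valid 24 hours from now. A rough Go equivalent for a single file (the path is copied from the log; everything else is a sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Same idea as `openssl x509 -noout -checkend 86400 -in <cert>`.
	path := "/var/lib/minikube/certs/apiserver-kubelet-client.crt"
	pemBytes, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Println("certificate expires within 24h:", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Println("certificate valid past", deadline.Format(time.RFC3339))
}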
	I1219 03:36:51.739988   51711 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:51.740068   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:36:51.740145   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.779756   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.779780   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.779786   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.779790   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.779794   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.779800   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.779804   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.779808   51711 cri.go:92] found id: ""
	I1219 03:36:51.779864   51711 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1219 03:36:51.796814   51711 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:36:51Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1219 03:36:51.796914   51711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:36:51.809895   51711 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:36:51.809912   51711 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:36:51.809956   51711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:36:51.821465   51711 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:36:51.822684   51711 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-382606" does not appear in /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:51.823576   51711 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5003/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-382606" cluster setting kubeconfig missing "default-k8s-diff-port-382606" context setting]
	I1219 03:36:51.824679   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.826925   51711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:36:51.838686   51711 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.129
	I1219 03:36:51.838723   51711 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:36:51.838740   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1219 03:36:51.838793   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.874959   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.874981   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.874995   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.874998   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.875001   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.875004   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.875019   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.875022   51711 cri.go:92] found id: ""
	I1219 03:36:51.875027   51711 cri.go:255] Stopping containers: [64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c]
	I1219 03:36:51.875080   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:51.879700   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c
	I1219 03:36:51.939513   51711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:36:51.985557   51711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:36:51.999714   51711 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:36:51.999739   51711 kubeadm.go:158] found existing configuration files:
	
	I1219 03:36:51.999807   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:36:52.011529   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:36:52.011594   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:36:52.023630   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:36:52.036507   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:36:52.036566   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:36:52.048019   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.061421   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:36:52.061498   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.073436   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:36:52.084186   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:36:52.084244   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:36:52.098426   51711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:36:52.111056   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:52.261515   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.323343   51711 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.061779829s)
	I1219 03:36:54.323428   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.593075   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:53.657242   51386 addons.go:546] duration metric: took 25.688774629s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1219 03:36:53.657289   51386 start.go:247] waiting for cluster config update ...
	I1219 03:36:53.657306   51386 start.go:256] writing updated cluster config ...
	I1219 03:36:53.657575   51386 ssh_runner.go:195] Run: rm -f paused
	I1219 03:36:53.663463   51386 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:53.667135   51386 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.672738   51386 pod_ready.go:94] pod "coredns-66bc5c9577-4csbt" is "Ready"
	I1219 03:36:53.672765   51386 pod_ready.go:86] duration metric: took 5.607283ms for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.675345   51386 pod_ready.go:83] waiting for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.679709   51386 pod_ready.go:94] pod "etcd-embed-certs-832734" is "Ready"
	I1219 03:36:53.679732   51386 pod_ready.go:86] duration metric: took 4.36675ms for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.681513   51386 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.685784   51386 pod_ready.go:94] pod "kube-apiserver-embed-certs-832734" is "Ready"
	I1219 03:36:53.685803   51386 pod_ready.go:86] duration metric: took 4.273628ms for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.688112   51386 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.068844   51386 pod_ready.go:94] pod "kube-controller-manager-embed-certs-832734" is "Ready"
	I1219 03:36:54.068878   51386 pod_ready.go:86] duration metric: took 380.74628ms for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.268799   51386 pod_ready.go:83] waiting for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.668935   51386 pod_ready.go:94] pod "kube-proxy-j49gn" is "Ready"
	I1219 03:36:54.668971   51386 pod_ready.go:86] duration metric: took 400.137967ms for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.868862   51386 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269481   51386 pod_ready.go:94] pod "kube-scheduler-embed-certs-832734" is "Ready"
	I1219 03:36:55.269512   51386 pod_ready.go:86] duration metric: took 400.62266ms for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269530   51386 pod_ready.go:40] duration metric: took 1.60604049s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:55.329865   51386 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:36:55.331217   51386 out.go:179] * Done! kubectl is now configured to use "embed-certs-832734" cluster and "default" namespace by default
	I1219 03:36:54.658040   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.764830   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:54.764901   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.265628   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.765546   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.265137   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.294858   51711 api_server.go:72] duration metric: took 1.53003596s to wait for apiserver process to appear ...
	I1219 03:36:56.294894   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:56.294920   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:56.295516   51711 api_server.go:269] stopped: https://192.168.72.129:8444/healthz: Get "https://192.168.72.129:8444/healthz": dial tcp 192.168.72.129:8444: connect: connection refused
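Note: the healthz loop here retries until the apiserver answers; the first attempts get connection refused, later ones 403 for anonymous access, then 500 while post-start hooks finish. A bare-bones Go poller for the same endpoint (TLS verification is disabled purely for the sketch; minikube authenticates with the cluster's client certificates instead):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	url := "https://192.168.72.129:8444/healthz" // endpoint taken from the log above
	client := &http.Client{
		Timeout: 2 * time.Second,
		// For the sketch only: skip server certificate verification.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 30; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Println("healthz not reachable yet:", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			fmt.Printf("attempt %d: HTTP %d: %s\n", attempt, resp.StatusCode, body)
			if resp.StatusCode == http.StatusOK {
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
}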
	I1219 03:36:56.795253   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.818365   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.818396   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:36:59.818426   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.867609   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.867642   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:37:00.295133   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.300691   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.300720   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:00.795111   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.825034   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.825068   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.295554   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.307047   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.307078   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.795401   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.800055   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.800091   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.295888   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.302103   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.302125   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.795818   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.802296   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.802326   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:03.296021   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:03.301661   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:03.310379   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:03.310412   51711 api_server.go:131] duration metric: took 7.01550899s to wait for apiserver health ...
	I1219 03:37:03.310425   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:37:03.310437   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:37:03.312477   51711 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:37:03.313819   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:37:03.331177   51711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:37:03.360466   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:03.365800   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:03.365852   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:37:03.365866   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:03.365876   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:03.365889   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:37:03.365896   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:03.365910   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:03.365918   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:03.365924   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:03.365935   51711 system_pods.go:74] duration metric: took 5.441032ms to wait for pod list to return data ...
	I1219 03:37:03.365944   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:03.369512   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:03.369539   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:03.369553   51711 node_conditions.go:105] duration metric: took 3.601059ms to run NodePressure ...
	I1219 03:37:03.369618   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:37:03.647329   51711 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651092   51711 kubeadm.go:744] kubelet initialised
	I1219 03:37:03.651116   51711 kubeadm.go:745] duration metric: took 3.75629ms waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651137   51711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:37:03.667607   51711 ops.go:34] apiserver oom_adj: -16
	I1219 03:37:03.667629   51711 kubeadm.go:602] duration metric: took 11.857709737s to restartPrimaryControlPlane
	I1219 03:37:03.667638   51711 kubeadm.go:403] duration metric: took 11.927656699s to StartCluster
	I1219 03:37:03.667662   51711 settings.go:142] acquiring lock: {Name:mk7f7ba85357bfc9fca2e66b70b16d967ca355d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.667744   51711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:37:03.669684   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.669943   51711 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:37:03.670026   51711 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:37:03.670125   51711 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670141   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:37:03.670153   51711 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670165   51711 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670174   51711 addons.go:248] addon metrics-server should already be in state true
	I1219 03:37:03.670145   51711 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382606"
	I1219 03:37:03.670175   51711 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670219   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.670222   51711 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382606"
	I1219 03:37:03.670185   51711 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670315   51711 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670328   51711 addons.go:248] addon dashboard should already be in state true
	I1219 03:37:03.670352   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	W1219 03:37:03.670200   51711 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:37:03.670428   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.671212   51711 out.go:179] * Verifying Kubernetes components...
	I1219 03:37:03.672712   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:37:03.673624   51711 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:03.673642   51711 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:37:03.674241   51711 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:37:03.674256   51711 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:37:03.674842   51711 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.674857   51711 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:37:03.674871   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.675431   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:37:03.675448   51711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:37:03.675481   51711 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:03.675502   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:37:03.677064   51711 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:03.677081   51711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:37:03.677620   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678481   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.678567   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678872   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.680203   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680419   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680904   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.680934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681162   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681407   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681444   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681467   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681685   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681950   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681982   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.682175   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.929043   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:37:03.969693   51711 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:04.174684   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:04.182529   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:04.184635   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:37:04.184660   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:37:04.197532   51711 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:37:04.242429   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:37:04.242455   51711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:37:04.309574   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:04.309600   51711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:37:04.367754   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:05.660040   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.485300577s)
	I1219 03:37:05.660070   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.477513606s)
	I1219 03:37:05.660116   51711 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.462552784s)
	I1219 03:37:05.660185   51711 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:37:05.673056   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.305263658s)
	I1219 03:37:05.673098   51711 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-382606"
	I1219 03:37:05.673137   51711 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	W1219 03:37:05.974619   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:06.630759   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	W1219 03:37:08.472974   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:10.195765   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.56493028s)
	I1219 03:37:10.195868   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:10.536948   51711 node_ready.go:49] node "default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:10.536984   51711 node_ready.go:38] duration metric: took 6.567254454s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:10.536999   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:37:10.537074   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:37:10.631962   51711 api_server.go:72] duration metric: took 6.961979571s to wait for apiserver process to appear ...
	I1219 03:37:10.631998   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:37:10.632041   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:10.633102   51711 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-382606"
	I1219 03:37:10.637827   51711 out.go:179] * Verifying dashboard addon...
	I1219 03:37:10.641108   51711 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:37:10.648897   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:10.650072   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:10.650099   51711 api_server.go:131] duration metric: took 18.093601ms to wait for apiserver health ...
	I1219 03:37:10.650110   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:10.655610   51711 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:37:10.655627   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:10.657971   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:10.657998   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.658023   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.658033   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.658042   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.658048   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.658055   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.658064   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.658069   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.658080   51711 system_pods.go:74] duration metric: took 7.963499ms to wait for pod list to return data ...
	I1219 03:37:10.658089   51711 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:37:10.668090   51711 default_sa.go:45] found service account: "default"
	I1219 03:37:10.668118   51711 default_sa.go:55] duration metric: took 10.020956ms for default service account to be created ...
	I1219 03:37:10.668130   51711 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:37:10.680469   51711 system_pods.go:86] 8 kube-system pods found
	I1219 03:37:10.680493   51711 system_pods.go:89] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.680507   51711 system_pods.go:89] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.680513   51711 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.680520   51711 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.680525   51711 system_pods.go:89] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.680532   51711 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.680540   51711 system_pods.go:89] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.680555   51711 system_pods.go:89] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.680567   51711 system_pods.go:126] duration metric: took 12.428884ms to wait for k8s-apps to be running ...
	I1219 03:37:10.680577   51711 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:37:10.680634   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:37:10.723844   51711 system_svc.go:56] duration metric: took 43.258925ms WaitForService to wait for kubelet
	I1219 03:37:10.723871   51711 kubeadm.go:587] duration metric: took 7.05389644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:37:10.723887   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:10.731598   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:10.731620   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:10.731629   51711 node_conditions.go:105] duration metric: took 7.738835ms to run NodePressure ...
	I1219 03:37:10.731640   51711 start.go:242] waiting for startup goroutines ...
	I1219 03:37:11.145699   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:11.645111   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.144952   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.644987   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.151074   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.645695   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.146399   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.645725   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.146044   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.645372   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.145700   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.645126   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.145189   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.645089   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.151071   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.645879   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.145525   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.645572   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.144405   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.647145   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.145368   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.653732   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.146443   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.645800   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.145131   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.644929   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.145023   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.646072   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.145868   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.647994   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.147617   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.648227   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.149067   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.645432   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.145986   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.645392   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:29.149926   51711 kapi.go:107] duration metric: took 18.508817791s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:37:29.152664   51711 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-382606 addons enable metrics-server
	
	I1219 03:37:29.153867   51711 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:37:29.155085   51711 addons.go:546] duration metric: took 25.485078365s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:37:29.155131   51711 start.go:247] waiting for cluster config update ...
	I1219 03:37:29.155147   51711 start.go:256] writing updated cluster config ...
	I1219 03:37:29.156022   51711 ssh_runner.go:195] Run: rm -f paused
	I1219 03:37:29.170244   51711 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:29.178962   51711 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.186205   51711 pod_ready.go:94] pod "coredns-66bc5c9577-bzq6s" is "Ready"
	I1219 03:37:29.186234   51711 pod_ready.go:86] duration metric: took 7.24885ms for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.280615   51711 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.286426   51711 pod_ready.go:94] pod "etcd-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.286446   51711 pod_ready.go:86] duration metric: took 5.805885ms for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.288885   51711 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.293769   51711 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.293787   51711 pod_ready.go:86] duration metric: took 4.884445ms for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.296432   51711 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.576349   51711 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.576388   51711 pod_ready.go:86] duration metric: took 279.933458ms for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.777084   51711 pod_ready.go:83] waiting for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.176016   51711 pod_ready.go:94] pod "kube-proxy-vhml9" is "Ready"
	I1219 03:37:30.176047   51711 pod_ready.go:86] duration metric: took 398.930848ms for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.377206   51711 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776837   51711 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:30.776861   51711 pod_ready.go:86] duration metric: took 399.600189ms for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776872   51711 pod_ready.go:40] duration metric: took 1.606601039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:30.827211   51711 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:37:30.828493   51711 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-382606" cluster and "default" namespace by default
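
For context on the long block of /healthz output earlier in this log: after restarting the control plane, minikube repeatedly probes the apiserver's healthz endpoint and keeps retrying while post-start hooks such as rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes, and apiservice-discovery-controller still report failures (the initial 403s simply mean anonymous access to /healthz is not yet authorized). The following is a minimal, hypothetical sketch of that style of poll, not minikube's actual api_server.go; the endpoint URL and timeout are taken from the log above, and the skip-TLS-verify client is an assumption for illustration only.

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    // pollHealthz probes the apiserver's /healthz endpoint until it returns
    // 200 or the deadline expires. Non-200 responses (the 403s and 500s seen
    // in the log above) are treated as "not ready yet" and retried.
    func pollHealthz(url string, timeout time.Duration) error {
    	client := &http.Client{
    		// The apiserver serves a self-signed certificate in this setup,
    		// so this illustrative probe skips verification.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				return nil
    			}
    			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
    }

    func main() {
    	if err := pollHealthz("https://192.168.72.129:8444/healthz", 2*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
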
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	a22c8439567b6       6e38f40d628db       8 minutes ago       Running             storage-provisioner                    2                   257f02e5c309a       storage-provisioner                                     kube-system
	e1e5b294ce0f7       d9cbc9f4053ca       8 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   816b3bed82d12       kubernetes-dashboard-metrics-scraper-7685fd8b77-hjg9p   kubernetes-dashboard
	4a8a4c0810150       dd54374d0ab14       8 minutes ago       Running             kubernetes-dashboard-auth              0                   a98047015975b       kubernetes-dashboard-auth-5dd694bb47-w8bnh              kubernetes-dashboard
	a8e8ce1f347b7       a0607af4fcd8a       8 minutes ago       Running             kubernetes-dashboard-api               0                   0e7ea101ca745       kubernetes-dashboard-api-6549569bf5-86vvf               kubernetes-dashboard
	633ebe42c481f       59f642f485d26       9 minutes ago       Running             kubernetes-dashboard-web               0                   cfdd3c0920783       kubernetes-dashboard-web-5c9f966b98-z4wvm               kubernetes-dashboard
	bf9b8c16fb0f9       3a975970da2f5       9 minutes ago       Running             proxy                                  0                   6d4b0b1658432       kubernetes-dashboard-kong-9849c64bd-8sndd               kubernetes-dashboard
	439ed4dabc331       3a975970da2f5       9 minutes ago       Exited              clear-stale-pid                        0                   6d4b0b1658432       kubernetes-dashboard-kong-9849c64bd-8sndd               kubernetes-dashboard
	6a1898be03e51       52546a367cc9e       9 minutes ago       Running             coredns                                1                   a2dd723df1281       coredns-66bc5c9577-4csbt                                kube-system
	772d872ebeddd       56cc512116c8f       9 minutes ago       Running             busybox                                1                   9033c74168050       busybox                                                 default
	42e0e0df29296       6e38f40d628db       9 minutes ago       Exited              storage-provisioner                    1                   257f02e5c309a       storage-provisioner                                     kube-system
	9cb8a6c954574       36eef8e07bdd6       9 minutes ago       Running             kube-proxy                             1                   418e1caa3fec3       kube-proxy-j49gn                                        kube-system
	376bae94b419b       a3e246e9556e9       9 minutes ago       Running             etcd                                   1                   39a6b45405e06       etcd-embed-certs-832734                                 kube-system
	c4fe189224bd9       aec12dadf56dd       9 minutes ago       Running             kube-scheduler                         1                   62a0ab5babbe8       kube-scheduler-embed-certs-832734                       kube-system
	cc6fed85dd6b5       5826b25d990d7       9 minutes ago       Running             kube-controller-manager                1                   7b805c6dcca16       kube-controller-manager-embed-certs-832734              kube-system
	ecf7299638b47       aa27095f56193       9 minutes ago       Running             kube-apiserver                         1                   f7f61a577f0ad       kube-apiserver-embed-certs-832734                       kube-system
	5f35a042b5286       56cc512116c8f       11 minutes ago      Exited              busybox                                0                   bfb134b27d558       busybox                                                 default
	c5fb9f28eccc3       52546a367cc9e       12 minutes ago      Exited              coredns                                0                   189caf062373f       coredns-66bc5c9577-4csbt                                kube-system
	dfe3b60326d13       36eef8e07bdd6       12 minutes ago      Exited              kube-proxy                             0                   d24824eabb273       kube-proxy-j49gn                                        kube-system
	b1029f222f9bf       aec12dadf56dd       12 minutes ago      Exited              kube-scheduler                         0                   cbbddc552a8fb       kube-scheduler-embed-certs-832734                       kube-system
	08a7af5b4c31b       a3e246e9556e9       12 minutes ago      Exited              etcd                                   0                   841ebeae13edb       etcd-embed-certs-832734                                 kube-system
	d9f3752c9cb6f       5826b25d990d7       12 minutes ago      Exited              kube-controller-manager                0                   e7f7c3dcd1b64       kube-controller-manager-embed-certs-832734              kube-system
	fa3f43f32d054       aa27095f56193       12 minutes ago      Exited              kube-apiserver                         0                   8d0ccb0e0e4aa       kube-apiserver-embed-certs-832734                       kube-system
	
	
	==> containerd <==
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.232064039Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod6b0e0afa2f4a6a7cf649b449bcc0d1b8/376bae94b419b9be5bfcc2679b4605fcf724678ed94fcf6a02943ed3e2d9f50b/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.233582743Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda9f5c75f-441e-47fa-9e9a-e7720a9da989/bf9b8c16fb0f9e189114e750adde7d419cb0dfaa4ff8f92fd8aba24449dee8d6/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.234689988Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2af670ae-dcc8-4da1-87cc-c1c3a8588ee0/633ebe42c481f61adb312abdccb0ac35f4bc0f5b69e714c39223515087903512/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.235654145Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poddba889cc-f53c-47fe-ae78-cb48e17b1acb/9cb8a6c95457431429dd54a908e9cb7a9c7dd5256dcae18d99d7e3d2fb0f22b2/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.236577605Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod742c8d21-619e-4ced-af0f-72f096b866e6/6a1898be03e51c5d313e62713bcc4c9aeaaa84b8943addcc4cde5fe7c086b72e/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.237791301Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf9de80f2-143f-4f76-95c5-4ecfc46fdd1c/a8e8ce1f347b77e7ad8788a561119d579e0b72751ead05097ce5a0e60cbed4ca/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.238612541Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd9d18cd1-0e5d-48d7-a240-8dfe94ebe90b/4a8a4c08101504a74628dbbae888596631f3647072434cbfae4baf25f7a85130/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.239658239Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod1c489208-b4ab-4f27-b914-d4930d027443/e1e5b294ce0f74f6a3ec3a5cdde7b2d1ba619235fbf1d30702b061acf1d8ba8e/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.240725573Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod0dbc166e73ceb9ece62835f572ea5535/cc6fed85dd6b5fb4d8e3de856fd8ad48a3fb14ea5f01bb1a92f6abd126e0857a/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.241860714Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3c76a61c528d30b40219645dcc0b5583/c4fe189224bd9d23a20f2ea005344414664cba5082381682e72e83376eda78a8/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.243003245Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podefd499d3-fc07-4168-a175-0bee365b79f1/a22c8439567b6522d327858bdb7e780937b3476aba6b99efe142d2e68f041b48/hugetlb.2MB.events\""
	Dec 19 03:45:42 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:42.244865595Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod1da2ff6b-f366-4fcb-9aff-6f252b564072/772d872ebeddd80f97840072fc41e94b74b5a9151161d86fd99199e2350f7cac/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.272590149Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod914ccaafe35f5d66310b2948bacbdd6b/ecf7299638b47a87a73b63a8a145b6d5d7a55a4ec2f83e8f2cf6517b605575ee/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.275030684Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod6b0e0afa2f4a6a7cf649b449bcc0d1b8/376bae94b419b9be5bfcc2679b4605fcf724678ed94fcf6a02943ed3e2d9f50b/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.277657658Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda9f5c75f-441e-47fa-9e9a-e7720a9da989/bf9b8c16fb0f9e189114e750adde7d419cb0dfaa4ff8f92fd8aba24449dee8d6/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.279247549Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2af670ae-dcc8-4da1-87cc-c1c3a8588ee0/633ebe42c481f61adb312abdccb0ac35f4bc0f5b69e714c39223515087903512/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.280732316Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poddba889cc-f53c-47fe-ae78-cb48e17b1acb/9cb8a6c95457431429dd54a908e9cb7a9c7dd5256dcae18d99d7e3d2fb0f22b2/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.281652634Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod742c8d21-619e-4ced-af0f-72f096b866e6/6a1898be03e51c5d313e62713bcc4c9aeaaa84b8943addcc4cde5fe7c086b72e/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.283119865Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf9de80f2-143f-4f76-95c5-4ecfc46fdd1c/a8e8ce1f347b77e7ad8788a561119d579e0b72751ead05097ce5a0e60cbed4ca/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.284102527Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd9d18cd1-0e5d-48d7-a240-8dfe94ebe90b/4a8a4c08101504a74628dbbae888596631f3647072434cbfae4baf25f7a85130/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.284890474Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod1c489208-b4ab-4f27-b914-d4930d027443/e1e5b294ce0f74f6a3ec3a5cdde7b2d1ba619235fbf1d30702b061acf1d8ba8e/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.286056268Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod0dbc166e73ceb9ece62835f572ea5535/cc6fed85dd6b5fb4d8e3de856fd8ad48a3fb14ea5f01bb1a92f6abd126e0857a/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.287648518Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3c76a61c528d30b40219645dcc0b5583/c4fe189224bd9d23a20f2ea005344414664cba5082381682e72e83376eda78a8/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.288683524Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podefd499d3-fc07-4168-a175-0bee365b79f1/a22c8439567b6522d327858bdb7e780937b3476aba6b99efe142d2e68f041b48/hugetlb.2MB.events\""
	Dec 19 03:45:52 embed-certs-832734 containerd[721]: time="2025-12-19T03:45:52.290084651Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod1da2ff6b-f366-4fcb-9aff-6f252b564072/772d872ebeddd80f97840072fc41e94b74b5a9151161d86fd99199e2350f7cac/hugetlb.2MB.events\""
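
	The repeated containerd errors above come from the hugetlb.2MB.events cgroup files, whose content (as quoted in the log) is the two-field line "max 0" rather than a bare number. A minimal Go sketch, not containerd's actual code and assuming only the file content shown in the log, illustrates why parsing the whole line as a uint fails while splitting it into key/value fields succeeds:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	func main() {
		line := "max 0" // content reported for hugetlb.2MB.events in the log above

		// Parsing the raw line fails; this is the error the containerd log repeats.
		if _, err := strconv.ParseUint(line, 10, 64); err != nil {
			fmt.Println("parse error:", err)
		}

		// Splitting into "key value" fields first recovers the counter.
		if fields := strings.Fields(line); len(fields) == 2 {
			v, _ := strconv.ParseUint(fields[1], 10, 64)
			fmt.Printf("%s = %d\n", fields[0], v)
		}
	}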
	
	
	==> coredns [6a1898be03e51c5d313e62713bcc4c9aeaaa84b8943addcc4cde5fe7c086b72e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55890 - 44584 "HINFO IN 8132124525573535760.3710171668199546970. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019108501s
	
	
	==> coredns [c5fb9f28eccc3debe1e2dd42634197f5f7016a7227dd488079a1a152f607bc05] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	[INFO] Reloading complete
	[INFO] 127.0.0.1:34993 - 54404 "HINFO IN 2587688579333303283.3984501632073358796. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013651334s
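
	The i/o timeouts in the earlier coredns instance above are plain TCP dial failures against the in-cluster apiserver service IP while the control plane was still coming up after the restart. A minimal probe sketch for illustration only, assuming it runs inside the pod network where 10.96.0.1 is routable:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same target the coredns kubernetes plugin reports timing out against.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// Mirrors the "dial tcp 10.96.0.1:443: i/o timeout" lines above.
			fmt.Println("dial failed:", err)
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}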
	
	
	==> describe nodes <==
	Name:               embed-certs-832734
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-832734
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-832734
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_33_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:33:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-832734
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:45:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:42:51 +0000   Fri, 19 Dec 2025 03:33:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:42:51 +0000   Fri, 19 Dec 2025 03:33:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:42:51 +0000   Fri, 19 Dec 2025 03:33:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:42:51 +0000   Fri, 19 Dec 2025 03:36:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.196
	  Hostname:    embed-certs-832734
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e96458273e9466aaf48ea8d012fdc6b
	  System UUID:                4e964582-73e9-466a-af48-ea8d012fdc6b
	  Boot ID:                    2ae5f3b4-7267-4819-b472-419e7f256fa9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-4csbt                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-embed-certs-832734                                  100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-embed-certs-832734                        250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-embed-certs-832734               200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-j49gn                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-embed-certs-832734                        100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-kcjq7                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        kubernetes-dashboard-api-6549569bf5-86vvf                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m22s
	  kubernetes-dashboard        kubernetes-dashboard-auth-5dd694bb47-w8bnh               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m22s
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-8sndd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-hjg9p    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m22s
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-z4wvm                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m30s                  kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node embed-certs-832734 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node embed-certs-832734 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node embed-certs-832734 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node embed-certs-832734 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node embed-certs-832734 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node embed-certs-832734 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                    kubelet          Node embed-certs-832734 status is now: NodeReady
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           12m                    node-controller  Node embed-certs-832734 event: Registered Node embed-certs-832734 in Controller
	  Normal   Starting                 9m38s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m38s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m37s (x8 over 9m38s)  kubelet          Node embed-certs-832734 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m37s (x8 over 9m38s)  kubelet          Node embed-certs-832734 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m37s (x7 over 9m38s)  kubelet          Node embed-certs-832734 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 9m32s                  kubelet          Node embed-certs-832734 has been rebooted, boot id: 2ae5f3b4-7267-4819-b472-419e7f256fa9
	  Normal   RegisteredNode           9m28s                  node-controller  Node embed-certs-832734 event: Registered Node embed-certs-832734 in Controller
	
	
	==> dmesg <==
	[Dec19 03:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001667] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005181] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.720014] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088453] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.329274] kauditd_printk_skb: 133 callbacks suppressed
	[  +5.392195] kauditd_printk_skb: 140 callbacks suppressed
	[  +1.908801] kauditd_printk_skb: 255 callbacks suppressed
	[  +3.635308] kauditd_printk_skb: 59 callbacks suppressed
	[  +9.689011] kauditd_printk_skb: 177 callbacks suppressed
	[  +6.399435] kauditd_printk_skb: 27 callbacks suppressed
	[Dec19 03:37] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.835811] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [08a7af5b4c31b1181858e51d510aca3efc7b8c3c067c43ad905f888e6f55c08b] <==
	{"level":"warn","ts":"2025-12-19T03:33:27.793179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.806595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.833983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.842641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.871036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.885154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.900731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.923546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.935660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.963827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.971835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.988308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.011038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.046323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.083089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.101189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.110643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.131285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.157863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.238500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:34.374744Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"304.303603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-12-19T03:33:34.374961Z","caller":"traceutil/trace.go:172","msg":"trace[1634756299] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:317; }","duration":"304.56404ms","start":"2025-12-19T03:33:34.070381Z","end":"2025-12-19T03:33:34.374945Z","steps":["trace[1634756299] 'range keys from in-memory index tree'  (duration: 304.050866ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:34.375060Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:33:34.070366Z","time spent":"304.627597ms","remote":"127.0.0.1:37760","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" limit:1 "}
	{"level":"info","ts":"2025-12-19T03:33:52.712697Z","caller":"traceutil/trace.go:172","msg":"trace[668183353] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"128.941779ms","start":"2025-12-19T03:33:52.583727Z","end":"2025-12-19T03:33:52.712669Z","steps":["trace[668183353] 'process raft request'  (duration: 128.814908ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:53.812965Z","caller":"traceutil/trace.go:172","msg":"trace[2018071080] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"113.257824ms","start":"2025-12-19T03:33:53.699695Z","end":"2025-12-19T03:33:53.812953Z","steps":["trace[2018071080] 'process raft request'  (duration: 113.163652ms)"],"step_count":1}
	
	
	==> etcd [376bae94b419b9be5bfcc2679b4605fcf724678ed94fcf6a02943ed3e2d9f50b] <==
	{"level":"warn","ts":"2025-12-19T03:36:36.519751Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:36:36.214514Z","time spent":"305.227322ms","remote":"127.0.0.1:55466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":81,"response count":1,"response size":3855,"request content":"key:\"/registry/pods/kubernetes-dashboard/kubernetes-dashboard-api-6549569bf5-86vvf\" limit:1 "}
	{"level":"info","ts":"2025-12-19T03:36:46.113259Z","caller":"traceutil/trace.go:172","msg":"trace[1390186222] linearizableReadLoop","detail":"{readStateIndex:857; appliedIndex:857; }","duration":"471.433723ms","start":"2025-12-19T03:36:45.641808Z","end":"2025-12-19T03:36:46.113241Z","steps":["trace[1390186222] 'read index received'  (duration: 471.425929ms)","trace[1390186222] 'applied index is now lower than readState.Index'  (duration: 6.695µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:36:46.113364Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"471.541884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:36:46.113414Z","caller":"traceutil/trace.go:172","msg":"trace[1576369177] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:808; }","duration":"471.605033ms","start":"2025-12-19T03:36:45.641802Z","end":"2025-12-19T03:36:46.113407Z","steps":["trace[1576369177] 'agreement among raft nodes before linearized reading'  (duration: 471.508051ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:46.113437Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:36:45.641780Z","time spent":"471.65133ms","remote":"127.0.0.1:55466","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/pods\" limit:1 "}
	{"level":"info","ts":"2025-12-19T03:36:46.113770Z","caller":"traceutil/trace.go:172","msg":"trace[621120073] transaction","detail":"{read_only:false; response_revision:809; number_of_response:1; }","duration":"562.19126ms","start":"2025-12-19T03:36:45.551573Z","end":"2025-12-19T03:36:46.113764Z","steps":["trace[621120073] 'process raft request'  (duration: 562.109392ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:46.113829Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:36:45.551553Z","time spent":"562.240075ms","remote":"127.0.0.1:55608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":683,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-cnb33inqcmsn2sg3k6y5m7wx74\" mod_revision:689 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-cnb33inqcmsn2sg3k6y5m7wx74\" value_size:610 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-cnb33inqcmsn2sg3k6y5m7wx74\" > >"}
	{"level":"info","ts":"2025-12-19T03:36:46.114145Z","caller":"traceutil/trace.go:172","msg":"trace[1104665571] transaction","detail":"{read_only:false; response_revision:810; number_of_response:1; }","duration":"545.831606ms","start":"2025-12-19T03:36:45.568298Z","end":"2025-12-19T03:36:46.114130Z","steps":["trace[1104665571] 'process raft request'  (duration: 545.702568ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:46.114283Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:36:45.568273Z","time spent":"545.958676ms","remote":"127.0.0.1:55608","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":561,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/embed-certs-832734\" mod_revision:690 > success:<request_put:<key:\"/registry/leases/kube-node-lease/embed-certs-832734\" value_size:502 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/embed-certs-832734\" > >"}
	{"level":"warn","ts":"2025-12-19T03:36:47.190022Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"110.427264ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.83.196\" limit:1 ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-12-19T03:36:47.191426Z","caller":"traceutil/trace.go:172","msg":"trace[264859493] range","detail":"{range_begin:/registry/masterleases/192.168.83.196; range_end:; response_count:1; response_revision:813; }","duration":"111.840081ms","start":"2025-12-19T03:36:47.079565Z","end":"2025-12-19T03:36:47.191405Z","steps":["trace[264859493] 'range keys from in-memory index tree'  (duration: 110.258845ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:59.663129Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.706661Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.730666Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.757258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46426","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.780234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.806244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.826525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.863343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.889063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.902287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.925683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.957714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46604","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:37:11.199335Z","caller":"traceutil/trace.go:172","msg":"trace[250933048] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"216.733003ms","start":"2025-12-19T03:37:10.982585Z","end":"2025-12-19T03:37:11.199318Z","steps":["trace[250933048] 'process raft request'  (duration: 216.554031ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:37:11.205715Z","caller":"traceutil/trace.go:172","msg":"trace[158165044] transaction","detail":"{read_only:false; response_revision:871; number_of_response:1; }","duration":"216.660217ms","start":"2025-12-19T03:37:10.989038Z","end":"2025-12-19T03:37:11.205698Z","steps":["trace[158165044] 'process raft request'  (duration: 216.361597ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:45:57 up 9 min,  0 users,  load average: 0.11, 0.22, 0.18
	Linux embed-certs-832734 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ecf7299638b47a87a73b63a8a145b6d5d7a55a4ec2f83e8f2cf6517b605575ee] <==
	I1219 03:41:25.853911       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:41:25.854026       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:41:25.854200       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:41:25.855635       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:42:25.854633       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:42:25.854714       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:42:25.854735       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:42:25.855944       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:42:25.856009       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:42:25.856031       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:44:25.855592       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:44:25.855760       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:44:25.855791       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:44:25.857044       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:44:25.857232       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:44:25.857313       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
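
	The 503s above indicate the aggregated v1beta1.metrics.k8s.io APIService never reports Available while metrics-server is unreachable. A short sketch using the kube-aggregator clientset, not part of the test suite and assuming KUBECONFIG points at the profile under test, reads that Available condition directly:

	package main

	import (
		"context"
		"fmt"
		"log"
		"os"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/tools/clientcmd"
		apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
		aggregator "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
	)

	func main() {
		// Assumption: KUBECONFIG is set; otherwise client-go falls back to in-cluster config.
		cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
		if err != nil {
			log.Fatal(err)
		}
		client, err := aggregator.NewForConfig(cfg)
		if err != nil {
			log.Fatal(err)
		}
		svc, err := client.ApiregistrationV1().APIServices().Get(
			context.Background(), "v1beta1.metrics.k8s.io", metav1.GetOptions{})
		if err != nil {
			log.Fatal(err)
		}
		for _, c := range svc.Status.Conditions {
			if c.Type == apiregistrationv1.Available {
				// Status=False here corresponds to the 503 "service unavailable"
				// responses the apiserver log reports for the OpenAPI fetch.
				fmt.Printf("Available=%s reason=%s message=%s\n", c.Status, c.Reason, c.Message)
			}
		}
	}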
	
	
	==> kube-apiserver [fa3f43f32d05406bc540cafbb00dd00cd5324efa640039d9086a756b209638c1] <==
	I1219 03:33:32.206989       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:33:36.876344       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:33:37.138636       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:33:37.171349       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:33:37.299056       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:34:30.087473       1 conn.go:339] Error on socket receive: read tcp 192.168.83.196:8443->192.168.83.1:35574: use of closed network connection
	I1219 03:34:30.767459       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:34:30.774314       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:30.774360       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:34:30.774404       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:34:30.929449       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.107.216.145"}
	W1219 03:34:30.953894       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:30.953993       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:34:30.956165       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W1219 03:34:30.964201       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:30.964260       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-controller-manager [cc6fed85dd6b5fb4d8e3de856fd8ad48a3fb14ea5f01bb1a92f6abd126e0857a] <==
	I1219 03:39:31.054696       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:40:00.989789       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:40:01.066556       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:40:30.996897       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:40:31.077859       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:41:01.003378       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:41:01.088826       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:41:31.010498       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:41:31.099324       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:42:01.015440       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:42:01.109628       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:42:31.022397       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:42:31.119327       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:43:01.028695       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:43:01.130611       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:43:31.035471       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:43:31.141509       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:44:01.041536       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:44:01.153315       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:44:31.047677       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:44:31.162860       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:45:01.053810       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:45:01.173552       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:45:31.060058       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:45:31.184461       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [d9f3752c9cb6fc42c4c6a525ab0da138c84562c0cd007f6ae8c924440a454275] <==
	I1219 03:33:36.179955       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:33:36.181212       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:33:36.181282       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:33:36.183728       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:33:36.184019       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:33:36.195024       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 03:33:36.197363       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:33:36.205714       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1219 03:33:36.206888       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 03:33:36.206970       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-832734"
	I1219 03:33:36.207007       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 03:33:36.213216       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:33:36.214316       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:33:36.224887       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 03:33:36.225965       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:33:36.226022       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:33:36.226714       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:33:36.228510       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:33:36.229268       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:33:36.229310       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 03:33:36.229328       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 03:33:36.229704       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:33:36.229741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 03:33:36.230838       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:33:36.244292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [9cb8a6c95457431429dd54a908e9cb7a9c7dd5256dcae18d99d7e3d2fb0f22b2] <==
	I1219 03:36:26.004139       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:36:26.104666       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:36:26.104724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.196"]
	E1219 03:36:26.104838       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:36:26.163444       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:36:26.163803       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:36:26.164132       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:36:26.178458       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:36:26.179601       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:36:26.179640       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:36:26.182488       1 config.go:200] "Starting service config controller"
	I1219 03:36:26.182514       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:36:26.182535       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:36:26.182538       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:36:26.182567       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:36:26.182588       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:36:26.189282       1 config.go:309] "Starting node config controller"
	I1219 03:36:26.189307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:36:26.283573       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:36:26.283603       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:36:26.283643       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:36:26.289975       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [dfe3b60326d13a2ff068327c17194ff77185eaf8fe59b42f7aa697f3ca2a4628] <==
	I1219 03:33:38.921610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:33:39.024217       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:33:39.028877       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.196"]
	E1219 03:33:39.031260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:33:39.108187       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:33:39.108271       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:33:39.108306       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:33:39.121122       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:33:39.121734       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:33:39.122150       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:33:39.131505       1 config.go:200] "Starting service config controller"
	I1219 03:33:39.131555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:33:39.131605       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:33:39.131610       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:33:39.131621       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:33:39.131624       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:33:39.132614       1 config.go:309] "Starting node config controller"
	I1219 03:33:39.132655       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:33:39.132664       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:33:39.231868       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:33:39.232067       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:33:39.232479       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [b1029f222f9bfc488f8a6e38154e34404bea6c9773db003212a53269860d7d0e] <==
	E1219 03:33:29.422651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:33:29.424998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:33:29.425310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:33:29.425580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:33:29.425856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:33:29.426222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:33:29.426578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:33:29.426908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:33:29.432826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:33:29.432635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:33:29.433383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:33:29.433469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:33:29.433509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:33:29.433540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:33:29.433585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:33:29.433609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:33:29.433682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:33:29.434093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:33:29.435224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 03:33:30.241084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:33:30.271931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:33:30.411828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:33:30.421022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:33:30.474859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1219 03:33:33.007891       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c4fe189224bd9d23a20f2ea005344414664cba5082381682e72e83376eda78a8] <==
	I1219 03:36:22.380222       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:36:24.819473       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:36:24.819552       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:36:24.819586       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:36:24.819600       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:36:24.900843       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:36:24.900902       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:36:24.912956       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:36:24.913765       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:36:24.916931       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:36:24.917978       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:36:25.015006       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:41:20 embed-certs-832734 kubelet[1085]: E1219 03:41:20.973409    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:41:34 embed-certs-832734 kubelet[1085]: E1219 03:41:34.974284    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:41:49 embed-certs-832734 kubelet[1085]: E1219 03:41:49.974862    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:42:02 embed-certs-832734 kubelet[1085]: E1219 03:42:02.973106    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:42:14 embed-certs-832734 kubelet[1085]: E1219 03:42:14.974751    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:42:29 embed-certs-832734 kubelet[1085]: E1219 03:42:29.984300    1085 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:42:29 embed-certs-832734 kubelet[1085]: E1219 03:42:29.984383    1085 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:42:29 embed-certs-832734 kubelet[1085]: E1219 03:42:29.984484    1085 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-kcjq7_kube-system(3df93f50-47ae-4697-9567-9a02426c3a6c): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 19 03:42:29 embed-certs-832734 kubelet[1085]: E1219 03:42:29.984529    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:42:40 embed-certs-832734 kubelet[1085]: E1219 03:42:40.975126    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:42:51 embed-certs-832734 kubelet[1085]: E1219 03:42:51.974981    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:43:03 embed-certs-832734 kubelet[1085]: E1219 03:43:03.974502    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:43:17 embed-certs-832734 kubelet[1085]: E1219 03:43:17.974377    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:43:31 embed-certs-832734 kubelet[1085]: E1219 03:43:31.976796    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:43:46 embed-certs-832734 kubelet[1085]: E1219 03:43:46.973942    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:43:59 embed-certs-832734 kubelet[1085]: E1219 03:43:59.973812    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:44:10 embed-certs-832734 kubelet[1085]: E1219 03:44:10.973959    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:44:25 embed-certs-832734 kubelet[1085]: E1219 03:44:25.973958    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:44:37 embed-certs-832734 kubelet[1085]: E1219 03:44:37.974961    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:44:48 embed-certs-832734 kubelet[1085]: E1219 03:44:48.973421    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:45:01 embed-certs-832734 kubelet[1085]: E1219 03:45:01.974113    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:45:14 embed-certs-832734 kubelet[1085]: E1219 03:45:14.973583    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:45:26 embed-certs-832734 kubelet[1085]: E1219 03:45:26.974592    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:45:40 embed-certs-832734 kubelet[1085]: E1219 03:45:40.973562    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:45:52 embed-certs-832734 kubelet[1085]: E1219 03:45:52.973778    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	
	
	==> kubernetes-dashboard [4a8a4c08101504a74628dbbae888596631f3647072434cbfae4baf25f7a85130] <==
	I1219 03:37:00.657731       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:37:00.657843       1 init.go:49] Using in-cluster config
	I1219 03:37:00.658266       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [633ebe42c481f61adb312abdccb0ac35f4bc0f5b69e714c39223515087903512] <==
	I1219 03:36:53.462967       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:36:53.464557       1 init.go:48] Using in-cluster config
	I1219 03:36:53.466753       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [a8e8ce1f347b77e7ad8788a561119d579e0b72751ead05097ce5a0e60cbed4ca] <==
	I1219 03:36:57.162259       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:36:57.162396       1 init.go:49] Using in-cluster config
	I1219 03:36:57.162866       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:36:57.162911       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:36:57.162921       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:36:57.162930       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:36:57.251593       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:36:57.251676       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:36:57.266979       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:36:57.267554       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:37:27.274131       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [e1e5b294ce0f74f6a3ec3a5cdde7b2d1ba619235fbf1d30702b061acf1d8ba8e] <==
	E1219 03:44:04.396137       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:45:04.391508       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	10.244.0.1 - - [19/Dec/2025:03:43:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:43:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:43:27 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:43:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:43:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:43:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:43:57 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:44:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:27 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:44:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:57 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:45:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:27 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:45:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:57 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	
	
	==> storage-provisioner [42e0e0df29296b9adf1cb69856162d4fe721dd68ba40736b43c6c25859de7cb4] <==
	I1219 03:36:25.766118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:36:55.796277       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a22c8439567b6522d327858bdb7e780937b3476aba6b99efe142d2e68f041b48] <==
	W1219 03:45:33.689768       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:35.693243       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:35.703866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:37.709148       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:37.718488       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:39.722754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:39.729038       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:41.732353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:41.738674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:43.742880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:43.751787       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:45.755477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:45.761558       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:47.765140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:47.775622       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:49.779433       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:49.784509       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:51.789491       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:51.796553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:53.801362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:53.811711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:55.816505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:55.823452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:57.831119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:45:57.841811       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
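As a rough triage aid for a dump this long, the recurring image-pull failures can be isolated with a one-liner. This is a minimal sketch, not part of the captured output, and it only reuses the binary path and profile name that already appear in this report:

	# re-dump the cluster logs for this profile and keep only the image-pull errors
	out/minikube-linux-amd64 -p embed-certs-832734 logs | grep -E 'ErrImagePull|ImagePullBackOff'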
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832734 -n embed-certs-832734
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-832734 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-kcjq7
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-832734 describe pod metrics-server-746fcd58dc-kcjq7
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-832734 describe pod metrics-server-746fcd58dc-kcjq7: exit status 1 (71.500745ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-kcjq7" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-832734 describe pod metrics-server-746fcd58dc-kcjq7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (542.71s)
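The assertion behind this failure is the 9-minute wait for pods labelled k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace (the equivalent wait is logged explicitly for the sibling default-k8s-diff-port test below). A minimal sketch of running that check by hand against this profile, using only the context, namespace and label that appear in this report; the 9m timeout mirrors the test's wait window and this is not the test's own code:

	# list the dashboard pods the test polls for
	kubectl --context embed-certs-832734 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# block until they are Ready, or give up after the same 9-minute window the test allows
	kubectl --context embed-certs-832734 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m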

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.66s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:37:33.807323    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:38.308477    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:41.548102    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:49.828340    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:57.748556    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:57.753880    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:57.764201    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:57.784483    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:57.824765    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:57.905140    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:58.065618    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:58.386365    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:59.027125    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:37:59.908259    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:38:00.307665    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:38:02.867825    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:38:05.011553    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:38:07.988240    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:38:18.229454    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:38:22.509156    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:38:30.789193    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:38:38.710071    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:39:19.670493    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:39:21.829447    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:39:44.430204    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:39:44.846663    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:39:49.963627    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:39:52.710128    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:39:54.464373    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:40:17.648100    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:40:21.169500    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:40:22.149687    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:40:34.379981    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:40:41.591559    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:40:48.851880    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:41:07.899141    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:41:37.985930    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:41:39.041858    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:42:00.584672    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:42:05.669781    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:42:08.866818    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:42:28.271335    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:42:36.550759    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:42:57.748567    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:43:25.432052    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:44:44.846804    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:44:49.963312    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:44:54.463965    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
start_stop_delete_test.go:272: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:46:31.338348935 +0000 UTC m=+4878.152977755
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-382606 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-382606 logs -n 25: (1.734452431s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────
───────────┐
	│ COMMAND │                                                                                                                       ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────
───────────┤
	│ ssh     │ -p bridge-694633 sudo cat /etc/containerd/config.toml                                                                                                                                                                                             │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo containerd config dump                                                                                                                                                                                                      │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │                     │
	│ ssh     │ -p bridge-694633 sudo systemctl cat crio --no-pager                                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                     │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo crio config                                                                                                                                                                                                                 │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p bridge-694633                                                                                                                                                                                                                                  │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p disable-driver-mounts-477416                                                                                                                                                                                                                   │ disable-driver-mounts-477416 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-638861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ stop    │ -p old-k8s-version-638861 --alsologtostderr -v=3                                                                                                                                                                                                  │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-728806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p no-preload-728806 --alsologtostderr -v=3                                                                                                                                                                                                       │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-832734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p embed-certs-832734 --alsologtostderr -v=3                                                                                                                                                                                                      │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ stop    │ -p default-k8s-diff-port-382606 --alsologtostderr -v=3                                                                                                                                                                                            │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-638861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────
───────────┘
	
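	A minimal manual re-check of the condition the UserAppExistsAfterStop/AddonExistsAfterStop tests poll for, assuming the same minikube binary and profile recorded in the audit trail above (illustrative sketch; the label selector is the one the kapi wait uses later in this log):
	
	  # list enabled addons for the profile, then query the dashboard web pod the test waits on
	  out/minikube-linux-amd64 -p default-k8s-diff-port-382606 addons list
	  out/minikube-linux-amd64 -p default-k8s-diff-port-382606 kubectl -- -n kubernetes-dashboard get pods -l app.kubernetes.io/name=kubernetes-dashboard-web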
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:36:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:36:29.621083   51711 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:36:29.621200   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621205   51711 out.go:374] Setting ErrFile to fd 2...
	I1219 03:36:29.621212   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621491   51711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:36:29.622131   51711 out.go:368] Setting JSON to false
	I1219 03:36:29.623408   51711 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4729,"bootTime":1766110661,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:36:29.623486   51711 start.go:143] virtualization: kvm guest
	I1219 03:36:29.625670   51711 out.go:179] * [default-k8s-diff-port-382606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:36:29.633365   51711 notify.go:221] Checking for updates...
	I1219 03:36:29.633417   51711 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:36:29.635075   51711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:36:29.636942   51711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:29.638374   51711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:36:29.639842   51711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:36:29.641026   51711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:36:29.642747   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:29.643478   51711 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:36:29.700163   51711 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:36:29.701162   51711 start.go:309] selected driver: kvm2
	I1219 03:36:29.701180   51711 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.701323   51711 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:36:29.702837   51711 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:29.702885   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:29.702957   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:29.703020   51711 start.go:353] cluster config:
	{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.703150   51711 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:36:29.704494   51711 out.go:179] * Starting "default-k8s-diff-port-382606" primary control-plane node in "default-k8s-diff-port-382606" cluster
	I1219 03:36:29.705691   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:29.705751   51711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:36:29.705771   51711 cache.go:65] Caching tarball of preloaded images
	I1219 03:36:29.705892   51711 preload.go:238] Found /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:36:29.705927   51711 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:36:29.706078   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:29.706318   51711 start.go:360] acquireMachinesLock for default-k8s-diff-port-382606: {Name:mkbf0ff4f4743f75373609a52c13bcf346114394 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:36:29.706374   51711 start.go:364] duration metric: took 32.309µs to acquireMachinesLock for "default-k8s-diff-port-382606"
	I1219 03:36:29.706388   51711 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:36:29.706395   51711 fix.go:54] fixHost starting: 
	I1219 03:36:29.708913   51711 fix.go:112] recreateIfNeeded on default-k8s-diff-port-382606: state=Stopped err=<nil>
	W1219 03:36:29.708943   51711 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:36:27.974088   51386 addons.go:239] Setting addon default-storageclass=true in "embed-certs-832734"
	W1219 03:36:27.974109   51386 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:36:27.974136   51386 host.go:66] Checking if "embed-certs-832734" exists ...
	I1219 03:36:27.974565   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:36:27.974582   51386 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:36:27.974599   51386 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:27.974608   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:36:27.976663   51386 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:27.976691   51386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:36:27.976771   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.977846   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.977880   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.978136   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.979376   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979747   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979820   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.979860   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980122   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.980448   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.980482   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980686   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.981056   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981521   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.981545   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981792   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:28.331935   51386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:28.393904   51386 node_ready.go:35] waiting up to 6m0s for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398272   51386 node_ready.go:49] node "embed-certs-832734" is "Ready"
	I1219 03:36:28.398297   51386 node_ready.go:38] duration metric: took 4.336343ms for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398310   51386 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:28.398457   51386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:28.475709   51386 api_server.go:72] duration metric: took 507.310055ms to wait for apiserver process to appear ...
	I1219 03:36:28.475751   51386 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:28.475776   51386 api_server.go:253] Checking apiserver healthz at https://192.168.83.196:8443/healthz ...
	I1219 03:36:28.483874   51386 api_server.go:279] https://192.168.83.196:8443/healthz returned 200:
	ok
	I1219 03:36:28.485710   51386 api_server.go:141] control plane version: v1.34.3
	I1219 03:36:28.485738   51386 api_server.go:131] duration metric: took 9.978141ms to wait for apiserver health ...
	I1219 03:36:28.485751   51386 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:36:28.493956   51386 system_pods.go:59] 8 kube-system pods found
	I1219 03:36:28.493996   51386 system_pods.go:61] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.494024   51386 system_pods.go:61] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.494037   51386 system_pods.go:61] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.494044   51386 system_pods.go:61] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.494052   51386 system_pods.go:61] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.494058   51386 system_pods.go:61] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.494064   51386 system_pods.go:61] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.494074   51386 system_pods.go:61] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.494080   51386 system_pods.go:74] duration metric: took 8.32329ms to wait for pod list to return data ...
	I1219 03:36:28.494090   51386 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:36:28.500269   51386 default_sa.go:45] found service account: "default"
	I1219 03:36:28.500298   51386 default_sa.go:55] duration metric: took 6.200379ms for default service account to be created ...
	I1219 03:36:28.500309   51386 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:36:28.601843   51386 system_pods.go:86] 8 kube-system pods found
	I1219 03:36:28.601871   51386 system_pods.go:89] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.601880   51386 system_pods.go:89] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.601887   51386 system_pods.go:89] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.601892   51386 system_pods.go:89] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.601896   51386 system_pods.go:89] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.601902   51386 system_pods.go:89] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.601921   51386 system_pods.go:89] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.601930   51386 system_pods.go:89] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.601938   51386 system_pods.go:126] duration metric: took 101.621956ms to wait for k8s-apps to be running ...
	I1219 03:36:28.601947   51386 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:36:28.602031   51386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:36:28.618616   51386 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:36:28.685146   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:36:28.685175   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:36:28.694410   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:28.696954   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:28.726390   51386 system_svc.go:56] duration metric: took 124.434217ms WaitForService to wait for kubelet
	I1219 03:36:28.726426   51386 kubeadm.go:587] duration metric: took 758.032732ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:28.726450   51386 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:36:28.726520   51386 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:36:28.739364   51386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:36:28.739393   51386 node_conditions.go:123] node cpu capacity is 2
	I1219 03:36:28.739407   51386 node_conditions.go:105] duration metric: took 12.951551ms to run NodePressure ...
	I1219 03:36:28.739421   51386 start.go:242] waiting for startup goroutines ...
	I1219 03:36:28.774949   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:36:28.774981   51386 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:36:28.896758   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:28.896785   51386 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:36:29.110522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:31.016418   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.319423876s)
	I1219 03:36:31.016497   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.322025841s)
	I1219 03:36:31.016534   51386 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.28998192s)
	I1219 03:36:31.016597   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.906047637s)
	I1219 03:36:31.016610   51386 addons.go:500] Verifying addon metrics-server=true in "embed-certs-832734"
	I1219 03:36:31.016613   51386 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:36:29.711054   51711 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-382606" ...
	I1219 03:36:29.711101   51711 main.go:144] libmachine: starting domain...
	I1219 03:36:29.711116   51711 main.go:144] libmachine: ensuring networks are active...
	I1219 03:36:29.712088   51711 main.go:144] libmachine: Ensuring network default is active
	I1219 03:36:29.712549   51711 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-382606 is active
	I1219 03:36:29.713312   51711 main.go:144] libmachine: getting domain XML...
	I1219 03:36:29.714943   51711 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-382606</name>
	  <uuid>342506c1-9e12-4922-9438-23d9d57eea28</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/default-k8s-diff-port-382606.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:fb:a4:4e'/>
	      <source network='mk-default-k8s-diff-port-382606'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:57:4f:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
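	The XML above is the libvirt domain definition the kvm2 driver boots for this profile. A minimal sketch of how the same domain and its DHCP lease could be inspected on the host, assuming the standard virsh CLI is available (illustrative, not captured output):
	
	  # show domain state, then the lease on the cluster network named in the logs
	  sudo virsh dominfo default-k8s-diff-port-382606
	  sudo virsh net-dhcp-leases mk-default-k8s-diff-port-382606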
	
	I1219 03:36:31.342655   51711 main.go:144] libmachine: waiting for domain to start...
	I1219 03:36:31.345734   51711 main.go:144] libmachine: domain is now running
	I1219 03:36:31.345778   51711 main.go:144] libmachine: waiting for IP...
	I1219 03:36:31.347227   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348141   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has current primary IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348163   51711 main.go:144] libmachine: found domain IP: 192.168.72.129
	I1219 03:36:31.348170   51711 main.go:144] libmachine: reserving static IP address...
	I1219 03:36:31.348677   51711 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.348704   51711 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-382606 - found existing host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"}
	I1219 03:36:31.348713   51711 main.go:144] libmachine: reserved static IP address 192.168.72.129 for domain default-k8s-diff-port-382606
	I1219 03:36:31.348731   51711 main.go:144] libmachine: waiting for SSH...
	I1219 03:36:31.348741   51711 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:36:31.351582   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352122   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.352155   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352422   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:31.352772   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:31.352782   51711 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:36:34.417281   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
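	This "no route to host" error appears to be the normal SSH polling while the restarted VM boots; further retries at 03:36:40 and 03:36:43 follow, and the dial succeeds at 03:36:46 below. If it persisted, a manual probe with the same key and user the driver uses (a sketch, with the key path taken from the sshutil lines in this log) would be:
	
	  ssh -o StrictHostKeyChecking=no -i /home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa docker@192.168.72.129 'exit 0'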
	I1219 03:36:31.980522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:36:35.707529   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.726958549s)
	I1219 03:36:35.707614   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:36:36.641432   51386 addons.go:500] Verifying addon dashboard=true in "embed-certs-832734"
	I1219 03:36:36.645285   51386 out.go:179] * Verifying dashboard addon...
	I1219 03:36:36.647847   51386 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:36:36.659465   51386 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:36:36.659491   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.154819   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.652042   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.152461   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.651730   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.152475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.652155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.153311   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.652427   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:41.151837   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.497282   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
	I1219 03:36:43.498703   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: connection refused
	I1219 03:36:41.654155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.154727   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.653186   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.152647   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.651177   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.154241   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.651752   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:45.152244   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.124796   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.151832   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.628602   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:36:46.632304   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.632730   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.632753   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.633056   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:46.633240   51711 machine.go:94] provisionDockerMachine start ...
	I1219 03:36:46.635441   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.635889   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.635934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.636109   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.636298   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.636308   51711 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:36:46.752911   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:36:46.752937   51711 buildroot.go:166] provisioning hostname "default-k8s-diff-port-382606"
	I1219 03:36:46.756912   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757425   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.757463   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757703   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.757935   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.757955   51711 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382606 && echo "default-k8s-diff-port-382606" | sudo tee /etc/hostname
	I1219 03:36:46.902266   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382606
	
	I1219 03:36:46.905791   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906293   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.906323   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906555   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.906758   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.906774   51711 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382606/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:36:47.045442   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:36:47.045472   51711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5003/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5003/.minikube}
	I1219 03:36:47.045496   51711 buildroot.go:174] setting up certificates
	I1219 03:36:47.045505   51711 provision.go:84] configureAuth start
	I1219 03:36:47.049643   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.050087   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.050115   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.052980   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053377   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.053417   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053596   51711 provision.go:143] copyHostCerts
	I1219 03:36:47.053653   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem, removing ...
	I1219 03:36:47.053678   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem
	I1219 03:36:47.053772   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem (1082 bytes)
	I1219 03:36:47.053902   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem, removing ...
	I1219 03:36:47.053919   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem
	I1219 03:36:47.053949   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem (1123 bytes)
	I1219 03:36:47.054027   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem, removing ...
	I1219 03:36:47.054036   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem
	I1219 03:36:47.054059   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem (1675 bytes)
	I1219 03:36:47.054113   51711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382606 san=[127.0.0.1 192.168.72.129 default-k8s-diff-port-382606 localhost minikube]
	I1219 03:36:47.093786   51711 provision.go:177] copyRemoteCerts
	I1219 03:36:47.093848   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:36:47.096938   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097402   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.097443   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097608   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.187589   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:36:47.229519   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:36:47.264503   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:36:47.294746   51711 provision.go:87] duration metric: took 249.22829ms to configureAuth
	I1219 03:36:47.294772   51711 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:36:47.294974   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:47.294990   51711 machine.go:97] duration metric: took 661.738495ms to provisionDockerMachine
	I1219 03:36:47.295000   51711 start.go:293] postStartSetup for "default-k8s-diff-port-382606" (driver="kvm2")
	I1219 03:36:47.295020   51711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:36:47.295079   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:36:47.297915   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298388   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.298414   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298592   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.391351   51711 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:36:47.396636   51711 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:36:47.396664   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/addons for local assets ...
	I1219 03:36:47.396734   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/files for local assets ...
	I1219 03:36:47.396833   51711 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem -> 89782.pem in /etc/ssl/certs
	I1219 03:36:47.396981   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:36:47.414891   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:47.450785   51711 start.go:296] duration metric: took 155.770681ms for postStartSetup
	I1219 03:36:47.450829   51711 fix.go:56] duration metric: took 17.744433576s for fixHost
	I1219 03:36:47.453927   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454408   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.454438   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454581   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:47.454774   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:47.454784   51711 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:36:47.578960   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766115407.541226750
	
	I1219 03:36:47.578984   51711 fix.go:216] guest clock: 1766115407.541226750
	I1219 03:36:47.578993   51711 fix.go:229] Guest: 2025-12-19 03:36:47.54122675 +0000 UTC Remote: 2025-12-19 03:36:47.450834556 +0000 UTC m=+17.907032910 (delta=90.392194ms)
	I1219 03:36:47.579033   51711 fix.go:200] guest clock delta is within tolerance: 90.392194ms
	I1219 03:36:47.579039   51711 start.go:83] releasing machines lock for "default-k8s-diff-port-382606", held for 17.872657006s
	I1219 03:36:47.582214   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.582699   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.582737   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.583361   51711 ssh_runner.go:195] Run: cat /version.json
	I1219 03:36:47.583439   51711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:36:47.586735   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.586965   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587209   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587236   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587400   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.587637   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587663   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587852   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.701374   51711 ssh_runner.go:195] Run: systemctl --version
	I1219 03:36:47.707956   51711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:36:47.714921   51711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:36:47.714993   51711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:36:47.736464   51711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:36:47.736487   51711 start.go:496] detecting cgroup driver to use...
	I1219 03:36:47.736550   51711 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:36:47.771913   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:36:47.789225   51711 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:36:47.789292   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:36:47.814503   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:36:47.832961   51711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:36:48.004075   51711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:36:48.227207   51711 docker.go:234] disabling docker service ...
	I1219 03:36:48.227297   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:36:48.245923   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:36:48.261992   51711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:36:48.443743   51711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:36:48.627983   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:36:48.647391   51711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:36:48.673139   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:36:48.690643   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:36:48.703896   51711 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 03:36:48.703949   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 03:36:48.718567   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.732932   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:36:48.749170   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.772676   51711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:36:48.787125   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:36:48.800190   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:36:48.812900   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
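
The sed commands above switch containerd to the cgroupfs cgroup driver by forcing SystemdCgroup = false (alongside the runtime-type and CNI conf_dir fixes) in /etc/containerd/config.toml. Below is a minimal Go sketch of just the SystemdCgroup rewrite, assuming direct local file access rather than minikube's SSH runner; the program is illustrative, not minikube code.

package main

import (
    "fmt"
    "os"
    "regexp"
)

func main() {
    // Path taken from the log; local file access stands in for ssh_runner here.
    path := "/etc/containerd/config.toml"
    data, err := os.ReadFile(path)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // Equivalent of: sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
    re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
    out := re.ReplaceAll(data, []byte("${1}SystemdCgroup = false"))
    if err := os.WriteFile(path, out, 0o644); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
}
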
	I1219 03:36:48.826147   51711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:36:48.841046   51711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:36:48.841107   51711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:36:48.867440   51711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:36:48.879351   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:49.048166   51711 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:36:49.092003   51711 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:36:49.092122   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:36:49.098374   51711 retry.go:31] will retry after 1.402478088s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1219 03:36:50.501086   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
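
After restarting containerd, the log waits up to 60s for /run/containerd/containerd.sock, retrying the stat when the socket is not there yet. A small sketch of that wait pattern, assuming a fixed poll interval instead of minikube's backoff-based retry helper:

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForSocket polls with os.Stat until the path exists or the timeout
// expires, mirroring the "Will wait 60s for socket path" step in the log.
// The fixed 1.5s interval is an assumption; minikube's retry helper backs off.
func waitForSocket(path string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        if _, err := os.Stat(path); err == nil {
            return nil
        }
        time.Sleep(1500 * time.Millisecond)
    }
    return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
    if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    fmt.Println("containerd socket is up")
}
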
	I1219 03:36:50.509026   51711 start.go:564] Will wait 60s for crictl version
	I1219 03:36:50.509089   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:50.514426   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:36:50.554888   51711 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:36:50.554956   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.583326   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.611254   51711 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:36:46.651075   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.206126   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.654221   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.152458   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.651475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.152863   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.655859   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.152073   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.655613   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.153352   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.653895   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.151537   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.653336   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.156131   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.652752   51386 kapi.go:107] duration metric: took 17.00490252s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:36:53.654689   51386 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-832734 addons enable metrics-server
	
	I1219 03:36:53.656077   51386 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1219 03:36:50.615098   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615498   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:50.615532   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615798   51711 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1219 03:36:50.620834   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
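
The bash one-liner above rewrites /etc/hosts by dropping any stale host.minikube.internal line and appending the current gateway IP. The same idea as a Go sketch, with the sudo and temp-file handling of the real command left out:

package main

import (
    "os"
    "strings"
)

func main() {
    // Gateway IP and hostname as logged; error handling is minimal on purpose.
    const entry = "192.168.72.1\thost.minikube.internal"
    data, err := os.ReadFile("/etc/hosts")
    if err != nil {
        panic(err)
    }
    var kept []string
    for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
        // grep -v $'\thost.minikube.internal$' equivalent: drop stale entries.
        if strings.HasSuffix(line, "\thost.minikube.internal") {
            continue
        }
        kept = append(kept, line)
    }
    kept = append(kept, entry)
    if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
        panic(err)
    }
}
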
	I1219 03:36:50.637469   51711 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:36:50.637614   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:50.637684   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.668556   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.668578   51711 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:36:50.668632   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.703466   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.703488   51711 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:36:50.703495   51711 kubeadm.go:935] updating node { 192.168.72.129 8444 v1.34.3 containerd true true} ...
	I1219 03:36:50.703585   51711 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-382606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:36:50.703648   51711 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:36:50.734238   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:50.734260   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:50.734277   51711 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:36:50.734306   51711 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382606 NodeName:default-k8s-diff-port-382606 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:36:50.734471   51711 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-382606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.129"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:36:50.734558   51711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:36:50.746945   51711 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:36:50.746995   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:36:50.758948   51711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1219 03:36:50.782923   51711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:36:50.807164   51711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
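
The rendered kubeadm config is shipped to /var/tmp/minikube/kubeadm.yaml.new before the init phases run. A consistency point worth noting: the KubeletConfiguration's cgroupDriver (cgroupfs above) should agree with the SystemdCgroup = false setting written into containerd earlier. A sketch of that check with gopkg.in/yaml.v3, assuming the KubeletConfiguration document has been saved to a local file (the filename is hypothetical):

package main

import (
    "fmt"
    "os"

    "gopkg.in/yaml.v3"
)

// kubeletConfig picks out just the fields visible in the dump above.
type kubeletConfig struct {
    Kind         string `yaml:"kind"`
    CgroupDriver string `yaml:"cgroupDriver"`
}

func main() {
    // "kubelet-config.yaml" is a hypothetical local copy of the
    // KubeletConfiguration document from the rendered kubeadm config.
    doc, err := os.ReadFile("kubelet-config.yaml")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    var kc kubeletConfig
    if err := yaml.Unmarshal(doc, &kc); err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    if kc.Kind == "KubeletConfiguration" && kc.CgroupDriver != "cgroupfs" {
        fmt.Println("warning: kubelet cgroupDriver does not match containerd's SystemdCgroup = false")
        os.Exit(1)
    }
    fmt.Println("kubelet cgroupDriver:", kc.CgroupDriver)
}
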
	I1219 03:36:50.829562   51711 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I1219 03:36:50.833888   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:36:50.849703   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:51.014216   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:51.062118   51711 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606 for IP: 192.168.72.129
	I1219 03:36:51.062147   51711 certs.go:195] generating shared ca certs ...
	I1219 03:36:51.062168   51711 certs.go:227] acquiring lock for ca certs: {Name:mk6db7e23547b9013e447eaa0ddba18e05213211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.062409   51711 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key
	I1219 03:36:51.062517   51711 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key
	I1219 03:36:51.062542   51711 certs.go:257] generating profile certs ...
	I1219 03:36:51.062681   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/client.key
	I1219 03:36:51.062791   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key.13c41c2b
	I1219 03:36:51.062855   51711 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key
	I1219 03:36:51.063062   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem (1338 bytes)
	W1219 03:36:51.063113   51711 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978_empty.pem, impossibly tiny 0 bytes
	I1219 03:36:51.063130   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:36:51.063176   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:36:51.063218   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:36:51.063256   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem (1675 bytes)
	I1219 03:36:51.063324   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:51.064049   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:36:51.108621   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:36:51.164027   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:36:51.199337   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:36:51.234216   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:36:51.283158   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:36:51.314148   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:36:51.344498   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:36:51.374002   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:36:51.403858   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem --> /usr/share/ca-certificates/8978.pem (1338 bytes)
	I1219 03:36:51.438346   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /usr/share/ca-certificates/89782.pem (1708 bytes)
	I1219 03:36:51.476174   51711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:36:51.499199   51711 ssh_runner.go:195] Run: openssl version
	I1219 03:36:51.506702   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.518665   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8978.pem /etc/ssl/certs/8978.pem
	I1219 03:36:51.530739   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536107   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:37 /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536167   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.543417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:36:51.554750   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8978.pem /etc/ssl/certs/51391683.0
	I1219 03:36:51.566106   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.577342   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89782.pem /etc/ssl/certs/89782.pem
	I1219 03:36:51.588583   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594342   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:37 /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594386   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.602417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.614493   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89782.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.626108   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.638273   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:36:51.650073   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655546   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655600   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.662728   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:36:51.675457   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:36:51.687999   51711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:36:51.693178   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:36:51.700656   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:36:51.708623   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:36:51.715865   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:36:51.725468   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:36:51.732847   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
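
Each control-plane certificate is checked above with openssl x509 -noout -checkend 86400, i.e. "will this cert still be valid 24 hours from now". The same check as a Go sketch using crypto/x509 instead of shelling out; only the file path is taken from the log:

package main

import (
    "crypto/x509"
    "encoding/pem"
    "fmt"
    "os"
    "time"
)

func main() {
    // One of the certificates checked in the log above.
    data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    block, _ := pem.Decode(data)
    if block == nil {
        fmt.Fprintln(os.Stderr, "no PEM block found")
        os.Exit(1)
    }
    cert, err := x509.ParseCertificate(block.Bytes)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // Same question as: openssl x509 -noout -checkend 86400
    if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
        fmt.Println("certificate expires within the next 86400 seconds")
        os.Exit(1)
    }
    fmt.Println("certificate valid until", cert.NotAfter)
}
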
	I1219 03:36:51.739988   51711 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:51.740068   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:36:51.740145   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.779756   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.779780   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.779786   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.779790   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.779794   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.779800   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.779804   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.779808   51711 cri.go:92] found id: ""
	I1219 03:36:51.779864   51711 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1219 03:36:51.796814   51711 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:36:51Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1219 03:36:51.796914   51711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:36:51.809895   51711 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:36:51.809912   51711 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:36:51.809956   51711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:36:51.821465   51711 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:36:51.822684   51711 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-382606" does not appear in /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:51.823576   51711 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5003/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-382606" cluster setting kubeconfig missing "default-k8s-diff-port-382606" context setting]
	I1219 03:36:51.824679   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.826925   51711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:36:51.838686   51711 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.129
	I1219 03:36:51.838723   51711 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:36:51.838740   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1219 03:36:51.838793   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.874959   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.874981   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.874995   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.874998   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.875001   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.875004   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.875019   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.875022   51711 cri.go:92] found id: ""
	I1219 03:36:51.875027   51711 cri.go:255] Stopping containers: [64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c]
	I1219 03:36:51.875080   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:51.879700   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c
	I1219 03:36:51.939513   51711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1219 03:36:51.985557   51711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:36:51.999714   51711 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:36:51.999739   51711 kubeadm.go:158] found existing configuration files:
	
	I1219 03:36:51.999807   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:36:52.011529   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:36:52.011594   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:36:52.023630   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:36:52.036507   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:36:52.036566   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:36:52.048019   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.061421   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:36:52.061498   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.073436   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:36:52.084186   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:36:52.084244   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:36:52.098426   51711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:36:52.111056   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:52.261515   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.323343   51711 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.061779829s)
	I1219 03:36:54.323428   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.593075   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:53.657242   51386 addons.go:546] duration metric: took 25.688774629s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1219 03:36:53.657289   51386 start.go:247] waiting for cluster config update ...
	I1219 03:36:53.657306   51386 start.go:256] writing updated cluster config ...
	I1219 03:36:53.657575   51386 ssh_runner.go:195] Run: rm -f paused
	I1219 03:36:53.663463   51386 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:53.667135   51386 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.672738   51386 pod_ready.go:94] pod "coredns-66bc5c9577-4csbt" is "Ready"
	I1219 03:36:53.672765   51386 pod_ready.go:86] duration metric: took 5.607283ms for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.675345   51386 pod_ready.go:83] waiting for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.679709   51386 pod_ready.go:94] pod "etcd-embed-certs-832734" is "Ready"
	I1219 03:36:53.679732   51386 pod_ready.go:86] duration metric: took 4.36675ms for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.681513   51386 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.685784   51386 pod_ready.go:94] pod "kube-apiserver-embed-certs-832734" is "Ready"
	I1219 03:36:53.685803   51386 pod_ready.go:86] duration metric: took 4.273628ms for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.688112   51386 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.068844   51386 pod_ready.go:94] pod "kube-controller-manager-embed-certs-832734" is "Ready"
	I1219 03:36:54.068878   51386 pod_ready.go:86] duration metric: took 380.74628ms for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.268799   51386 pod_ready.go:83] waiting for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.668935   51386 pod_ready.go:94] pod "kube-proxy-j49gn" is "Ready"
	I1219 03:36:54.668971   51386 pod_ready.go:86] duration metric: took 400.137967ms for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.868862   51386 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269481   51386 pod_ready.go:94] pod "kube-scheduler-embed-certs-832734" is "Ready"
	I1219 03:36:55.269512   51386 pod_ready.go:86] duration metric: took 400.62266ms for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269530   51386 pod_ready.go:40] duration metric: took 1.60604049s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:55.329865   51386 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:36:55.331217   51386 out.go:179] * Done! kubectl is now configured to use "embed-certs-832734" cluster and "default" namespace by default
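
Before the Done! line, the embed-certs run polls the labelled kube-system pods until each reports the Ready condition. A sketch of the same readiness check with client-go, assuming KUBECONFIG points at the test cluster; the namespace and one of the label selectors come from the log, everything else is illustrative:

package main

import (
    "context"
    "fmt"
    "os"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // KUBECONFIG pointing at the test cluster is an assumption of this sketch.
    cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    client, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    // One of the label selectors the log waits on; the others work the same way.
    pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
        metav1.ListOptions{LabelSelector: "k8s-app=kube-proxy"})
    if err != nil {
        fmt.Fprintln(os.Stderr, err)
        os.Exit(1)
    }
    for _, p := range pods.Items {
        ready := false
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        fmt.Printf("%s Ready=%v\n", p.Name, ready)
    }
}
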
	I1219 03:36:54.658040   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.764830   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:54.764901   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.265628   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.765546   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.265137   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.294858   51711 api_server.go:72] duration metric: took 1.53003596s to wait for apiserver process to appear ...
	I1219 03:36:56.294894   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:56.294920   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:56.295516   51711 api_server.go:269] stopped: https://192.168.72.129:8444/healthz: Get "https://192.168.72.129:8444/healthz": dial tcp 192.168.72.129:8444: connect: connection refused
	I1219 03:36:56.795253   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.818365   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.818396   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:36:59.818426   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.867609   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.867642   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:37:00.295133   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.300691   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.300720   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:00.795111   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.825034   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.825068   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.295554   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.307047   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.307078   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.795401   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.800055   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.800091   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.295888   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.302103   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.302125   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.795818   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.802296   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.802326   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:03.296021   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:03.301661   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:03.310379   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:03.310412   51711 api_server.go:131] duration metric: took 7.01550899s to wait for apiserver health ...
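For reference, the [+]/[-] check listings above are the verbose form of the kube-apiserver health endpoint that minikube polls while waiting for the control plane to come back. A comparable probe can be run by hand against the same endpoint; this is only a sketch, assuming the address and port shown in this log (192.168.72.129:8444) and that the default RBAC binding exposing the health endpoints to anonymous callers is still in place (otherwise a bearer token would be needed):

    curl -k 'https://192.168.72.129:8444/healthz?verbose'
    # readiness checks can be queried the same way
    curl -k 'https://192.168.72.129:8444/readyz?verbose'

The -k flag skips TLS verification, since the apiserver here serves a cluster-internal certificate.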
	I1219 03:37:03.310425   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:37:03.310437   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:37:03.312477   51711 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:37:03.313819   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:37:03.331177   51711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:37:03.360466   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:03.365800   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:03.365852   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:37:03.365866   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:03.365876   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:03.365889   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:37:03.365896   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:03.365910   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:03.365918   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:03.365924   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:03.365935   51711 system_pods.go:74] duration metric: took 5.441032ms to wait for pod list to return data ...
	I1219 03:37:03.365944   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:03.369512   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:03.369539   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:03.369553   51711 node_conditions.go:105] duration metric: took 3.601059ms to run NodePressure ...
	I1219 03:37:03.369618   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:37:03.647329   51711 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651092   51711 kubeadm.go:744] kubelet initialised
	I1219 03:37:03.651116   51711 kubeadm.go:745] duration metric: took 3.75629ms waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651137   51711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:37:03.667607   51711 ops.go:34] apiserver oom_adj: -16
	I1219 03:37:03.667629   51711 kubeadm.go:602] duration metric: took 11.857709737s to restartPrimaryControlPlane
	I1219 03:37:03.667638   51711 kubeadm.go:403] duration metric: took 11.927656699s to StartCluster
	I1219 03:37:03.667662   51711 settings.go:142] acquiring lock: {Name:mk7f7ba85357bfc9fca2e66b70b16d967ca355d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.667744   51711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:37:03.669684   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.669943   51711 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:37:03.670026   51711 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:37:03.670125   51711 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670141   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:37:03.670153   51711 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670165   51711 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670174   51711 addons.go:248] addon metrics-server should already be in state true
	I1219 03:37:03.670145   51711 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382606"
	I1219 03:37:03.670175   51711 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670219   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.670222   51711 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382606"
	I1219 03:37:03.670185   51711 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670315   51711 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670328   51711 addons.go:248] addon dashboard should already be in state true
	I1219 03:37:03.670352   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	W1219 03:37:03.670200   51711 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:37:03.670428   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.671212   51711 out.go:179] * Verifying Kubernetes components...
	I1219 03:37:03.672712   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:37:03.673624   51711 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:03.673642   51711 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:37:03.674241   51711 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:37:03.674256   51711 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:37:03.674842   51711 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.674857   51711 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:37:03.674871   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.675431   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:37:03.675448   51711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:37:03.675481   51711 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:03.675502   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:37:03.677064   51711 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:03.677081   51711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:37:03.677620   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678481   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.678567   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678872   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.680203   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680419   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680904   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.680934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681162   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681407   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681444   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681467   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681685   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681950   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681982   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.682175   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.929043   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:37:03.969693   51711 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:04.174684   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:04.182529   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:04.184635   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:37:04.184660   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:37:04.197532   51711 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:37:04.242429   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:37:04.242455   51711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:37:04.309574   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:04.309600   51711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:37:04.367754   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:05.660040   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.485300577s)
	I1219 03:37:05.660070   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.477513606s)
	I1219 03:37:05.660116   51711 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.462552784s)
	I1219 03:37:05.660185   51711 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:37:05.673056   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.305263658s)
	I1219 03:37:05.673098   51711 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-382606"
	I1219 03:37:05.673137   51711 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	W1219 03:37:05.974619   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:06.630759   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	W1219 03:37:08.472974   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:10.195765   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.56493028s)
	I1219 03:37:10.195868   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:10.536948   51711 node_ready.go:49] node "default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:10.536984   51711 node_ready.go:38] duration metric: took 6.567254454s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:10.536999   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:37:10.537074   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:37:10.631962   51711 api_server.go:72] duration metric: took 6.961979571s to wait for apiserver process to appear ...
	I1219 03:37:10.631998   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:37:10.632041   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:10.633102   51711 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-382606"
	I1219 03:37:10.637827   51711 out.go:179] * Verifying dashboard addon...
	I1219 03:37:10.641108   51711 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:37:10.648897   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:10.650072   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:10.650099   51711 api_server.go:131] duration metric: took 18.093601ms to wait for apiserver health ...
	I1219 03:37:10.650110   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:10.655610   51711 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:37:10.655627   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:10.657971   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:10.657998   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.658023   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.658033   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.658042   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.658048   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.658055   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.658064   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.658069   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.658080   51711 system_pods.go:74] duration metric: took 7.963499ms to wait for pod list to return data ...
	I1219 03:37:10.658089   51711 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:37:10.668090   51711 default_sa.go:45] found service account: "default"
	I1219 03:37:10.668118   51711 default_sa.go:55] duration metric: took 10.020956ms for default service account to be created ...
	I1219 03:37:10.668130   51711 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:37:10.680469   51711 system_pods.go:86] 8 kube-system pods found
	I1219 03:37:10.680493   51711 system_pods.go:89] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.680507   51711 system_pods.go:89] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.680513   51711 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.680520   51711 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.680525   51711 system_pods.go:89] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.680532   51711 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.680540   51711 system_pods.go:89] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.680555   51711 system_pods.go:89] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.680567   51711 system_pods.go:126] duration metric: took 12.428884ms to wait for k8s-apps to be running ...
	I1219 03:37:10.680577   51711 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:37:10.680634   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:37:10.723844   51711 system_svc.go:56] duration metric: took 43.258925ms WaitForService to wait for kubelet
	I1219 03:37:10.723871   51711 kubeadm.go:587] duration metric: took 7.05389644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:37:10.723887   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:10.731598   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:10.731620   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:10.731629   51711 node_conditions.go:105] duration metric: took 7.738835ms to run NodePressure ...
	I1219 03:37:10.731640   51711 start.go:242] waiting for startup goroutines ...
	I1219 03:37:11.145699   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:11.645111   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.144952   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.644987   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.151074   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.645695   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.146399   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.645725   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.146044   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.645372   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.145700   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.645126   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.145189   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.645089   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.151071   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.645879   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.145525   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.645572   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.144405   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.647145   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.145368   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.653732   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.146443   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.645800   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.145131   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.644929   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.145023   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.646072   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.145868   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.647994   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.147617   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.648227   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.149067   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.645432   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.145986   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.645392   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:29.149926   51711 kapi.go:107] duration metric: took 18.508817791s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
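The wait loop above polls for a pod matching the label selector app.kubernetes.io/name=kubernetes-dashboard-web until it leaves the Pending state. A rough hand-run equivalent, as a sketch assuming kubectl is pointed at the same cluster and the Helm release used the namespace shown in the log:

    kubectl -n kubernetes-dashboard wait pod \
      -l app.kubernetes.io/name=kubernetes-dashboard-web \
      --for=condition=Ready --timeout=120s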
	I1219 03:37:29.152664   51711 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-382606 addons enable metrics-server
	
	I1219 03:37:29.153867   51711 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:37:29.155085   51711 addons.go:546] duration metric: took 25.485078365s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:37:29.155131   51711 start.go:247] waiting for cluster config update ...
	I1219 03:37:29.155147   51711 start.go:256] writing updated cluster config ...
	I1219 03:37:29.156022   51711 ssh_runner.go:195] Run: rm -f paused
	I1219 03:37:29.170244   51711 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:29.178962   51711 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.186205   51711 pod_ready.go:94] pod "coredns-66bc5c9577-bzq6s" is "Ready"
	I1219 03:37:29.186234   51711 pod_ready.go:86] duration metric: took 7.24885ms for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.280615   51711 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.286426   51711 pod_ready.go:94] pod "etcd-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.286446   51711 pod_ready.go:86] duration metric: took 5.805885ms for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.288885   51711 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.293769   51711 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.293787   51711 pod_ready.go:86] duration metric: took 4.884445ms for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.296432   51711 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.576349   51711 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.576388   51711 pod_ready.go:86] duration metric: took 279.933458ms for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.777084   51711 pod_ready.go:83] waiting for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.176016   51711 pod_ready.go:94] pod "kube-proxy-vhml9" is "Ready"
	I1219 03:37:30.176047   51711 pod_ready.go:86] duration metric: took 398.930848ms for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.377206   51711 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776837   51711 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:30.776861   51711 pod_ready.go:86] duration metric: took 399.600189ms for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776872   51711 pod_ready.go:40] duration metric: took 1.606601039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:30.827211   51711 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:37:30.828493   51711 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-382606" cluster and "default" namespace by default
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	8858312cf5133       6e38f40d628db       8 minutes ago       Running             storage-provisioner                    2                   2a7b73727c50b       storage-provisioner                                     kube-system
	00cd7ad611cf4       3a975970da2f5       8 minutes ago       Running             proxy                                  0                   aec924010950e       kubernetes-dashboard-kong-9849c64bd-wgdnx               kubernetes-dashboard
	7f4bc72ab8030       3a975970da2f5       8 minutes ago       Exited              clear-stale-pid                        0                   aec924010950e       kubernetes-dashboard-kong-9849c64bd-wgdnx               kubernetes-dashboard
	0d9d949e94e6f       59f642f485d26       9 minutes ago       Running             kubernetes-dashboard-web               0                   20f11d2bbf1a0       kubernetes-dashboard-web-5c9f966b98-wwbc2               kubernetes-dashboard
	503098129741b       a0607af4fcd8a       9 minutes ago       Running             kubernetes-dashboard-api               0                   a238ed45df354       kubernetes-dashboard-api-5444544855-rgb27               kubernetes-dashboard
	4b740253b2e42       d9cbc9f4053ca       9 minutes ago       Running             kubernetes-dashboard-metrics-scraper   0                   c8401c8fc52a8       kubernetes-dashboard-metrics-scraper-7685fd8b77-kfx97   kubernetes-dashboard
	a0b5497708d9c       dd54374d0ab14       9 minutes ago       Running             kubernetes-dashboard-auth              0                   3b5b7035cc28c       kubernetes-dashboard-auth-75d54f6f86-bnd95              kubernetes-dashboard
	e11bfb213730c       56cc512116c8f       9 minutes ago       Running             busybox                                1                   f3a8800713fd9       busybox                                                 default
	89d545dd0db17       52546a367cc9e       9 minutes ago       Running             coredns                                1                   c4b48d81be80a       coredns-66bc5c9577-bzq6s                                kube-system
	ced946dadbf7a       6e38f40d628db       9 minutes ago       Exited              storage-provisioner                    1                   2a7b73727c50b       storage-provisioner                                     kube-system
	2b86f1c041410       36eef8e07bdd6       9 minutes ago       Running             kube-proxy                             1                   6c0050aa200d4       kube-proxy-vhml9                                        kube-system
	26b15c351a7f5       a3e246e9556e9       9 minutes ago       Running             etcd                                   1                   3d3fe1695a330       etcd-default-k8s-diff-port-382606                       kube-system
	417d2eb47c0a9       aec12dadf56dd       9 minutes ago       Running             kube-scheduler                         1                   0ca1b2caa6989       kube-scheduler-default-k8s-diff-port-382606             kube-system
	518a94577bb7d       aa27095f56193       9 minutes ago       Running             kube-apiserver                         1                   39ec5ef103cca       kube-apiserver-default-k8s-diff-port-382606             kube-system
	9caa0d440527b       5826b25d990d7       9 minutes ago       Running             kube-controller-manager                1                   07eba73b830e6       kube-controller-manager-default-k8s-diff-port-382606    kube-system
	45c1210726c66       56cc512116c8f       11 minutes ago      Exited              busybox                                0                   c511096e8686d       busybox                                                 default
	bae993c63f9a1       52546a367cc9e       12 minutes ago      Exited              coredns                                0                   0a051dd2a97a2       coredns-66bc5c9577-bzq6s                                kube-system
	e26689632e68d       36eef8e07bdd6       12 minutes ago      Exited              kube-proxy                             0                   8b82aa902baee       kube-proxy-vhml9                                        kube-system
	4acb45618ed01       aa27095f56193       12 minutes ago      Exited              kube-apiserver                         0                   7d900779b2564       kube-apiserver-default-k8s-diff-port-382606             kube-system
	7a54851195b09       aec12dadf56dd       12 minutes ago      Exited              kube-scheduler                         0                   a3d5034d7c252       kube-scheduler-default-k8s-diff-port-382606             kube-system
	d61732768d3fb       a3e246e9556e9       12 minutes ago      Exited              etcd                                   0                   a070eff3fe2f6       etcd-default-k8s-diff-port-382606                       kube-system
	f5b37d825fd69       5826b25d990d7       12 minutes ago      Exited              kube-controller-manager                0                   259e85dc6f99e       kube-controller-manager-default-k8s-diff-port-382606    kube-system
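The container listing above is CRI-level output gathered from the guest VM. A similar listing can be produced directly on the node; a sketch, assuming the profile name from this log and that crictl on the guest is already pointed at the containerd CRI socket:

    minikube -p default-k8s-diff-port-382606 ssh -- sudo crictl ps -a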
	
	
	==> containerd <==
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.975625280Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda15483a8-253a-46ba-89cf-a7281f75888f/e11bfb213730cb86d3eb541d27aa238dea64dfca5a8d94c2bd926d545c9d6e2f/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.976680023Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod6b019719-0fa6-4169-a8d7-56eb6752bd14/503098129741b3077535c690ccb45bf024c8d611f90bec4ecbbe47b18c85deb3/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.977757606Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podfc30c752e5dce8dd9191842cbc279eb5/9caa0d440527b29d084b74cd9fa77197ce53354e034e86874876263937324b73/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.978776650Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3e588983-8f37-472c-8234-e7dd2e1a6a4a/89d545dd0db1733ee4daff06cee68794fa2612e7839e9455fde9fb6eabbb7ef2/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.981245147Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4ceeb65a-96a3-46f8-b5bb-9eee51c1d4a4/a0b5497708d9ced75d92291f6ac713d36cce94b49103ad89b2dc84b7aa7aa541/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.983073646Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2d4447ed-82a8-491a-a4e1-627981605a48/4b740253b2e42187c5c5011e1460253c9b2cd55a5b3261393ba8f6ef0a57337c/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.984394522Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podce3f0db8d16dacb79fc90e036faf5ce3/26b15c351a7f5021930cc9f51f1954766cef758efb477c6f239d48814b55ad89/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.985370811Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod8bec61eb-4ec4-4f3f-abf1-d471842e5929/2b86f1c0414105cede5a3456f97249cb2d21efe909ab37873e3ce4a615c86eab/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.987260172Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4716d535-618f-4469-b896-418b93cfe8af/0d9d949e94e6f380d0ff910f058da5d15e70ebec0bbbf77042abc4fc76dd78d4/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.988639161Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod858ad0d3-1b87-42c8-9494-039b5e1da647/00cd7ad611cf47d9b49840544352ba45da7e52058115f4962ead6fd3e4db4d73/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.989636177Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod10e715ce-7edc-4af5-93e0-e975d561cdf3/8858312cf5133d1dadaf156295497a020ac7e467c5a2f7f19132111df9e8becd/hugetlb.2MB.events\""
	Dec 19 03:46:16 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:16.990858020Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod75e075711d0e80a5b7777d004254cc7c/518a94577bb7de5f6f62c647a74b67c644a3a4cea449ef09d71d4ad5df9ad912/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.011308426Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podfc30c752e5dce8dd9191842cbc279eb5/9caa0d440527b29d084b74cd9fa77197ce53354e034e86874876263937324b73/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.012870483Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3e588983-8f37-472c-8234-e7dd2e1a6a4a/89d545dd0db1733ee4daff06cee68794fa2612e7839e9455fde9fb6eabbb7ef2/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.013976004Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4ceeb65a-96a3-46f8-b5bb-9eee51c1d4a4/a0b5497708d9ced75d92291f6ac713d36cce94b49103ad89b2dc84b7aa7aa541/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.015397055Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2d4447ed-82a8-491a-a4e1-627981605a48/4b740253b2e42187c5c5011e1460253c9b2cd55a5b3261393ba8f6ef0a57337c/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.016598614Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podce3f0db8d16dacb79fc90e036faf5ce3/26b15c351a7f5021930cc9f51f1954766cef758efb477c6f239d48814b55ad89/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.017645919Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod8bec61eb-4ec4-4f3f-abf1-d471842e5929/2b86f1c0414105cede5a3456f97249cb2d21efe909ab37873e3ce4a615c86eab/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.018687077Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4716d535-618f-4469-b896-418b93cfe8af/0d9d949e94e6f380d0ff910f058da5d15e70ebec0bbbf77042abc4fc76dd78d4/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.020059710Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod858ad0d3-1b87-42c8-9494-039b5e1da647/00cd7ad611cf47d9b49840544352ba45da7e52058115f4962ead6fd3e4db4d73/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.021134594Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod10e715ce-7edc-4af5-93e0-e975d561cdf3/8858312cf5133d1dadaf156295497a020ac7e467c5a2f7f19132111df9e8becd/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.022069667Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod75e075711d0e80a5b7777d004254cc7c/518a94577bb7de5f6f62c647a74b67c644a3a4cea449ef09d71d4ad5df9ad912/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.023052532Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/poded75af231424877e71cf9380aa17a357/417d2eb47c0a973814eca73db740808aaf83346035e5cd9c14ff6314a66d7849/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.024401022Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda15483a8-253a-46ba-89cf-a7281f75888f/e11bfb213730cb86d3eb541d27aa238dea64dfca5a8d94c2bd926d545c9d6e2f/hugetlb.2MB.events\""
	Dec 19 03:46:27 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:46:27.025744676Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod6b019719-0fa6-4169-a8d7-56eb6752bd14/503098129741b3077535c690ccb45bf024c8d611f90bec4ecbbe47b18c85deb3/hugetlb.2MB.events\""
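	
	The repeated "max 0" parse errors above appear to be stats-collection noise rather than a container failure: the hugetlb.2MB.events file being read holds a key/value line ("max 0" here, i.e. zero hugepage-limit events), and the collector is reading the whole line as a single integer. A minimal Go sketch of the mismatch, assuming only the file contents shown in the log:
	
		package main
	
		import (
			"fmt"
			"strconv"
			"strings"
		)
	
		func main() {
			line := "max 0" // contents of hugetlb.2MB.events, per the log above
	
			// Reading the whole line as one uint reproduces the logged error.
			if _, err := strconv.ParseUint(line, 10, 64); err != nil {
				fmt.Println("parse as single uint:", err)
			}
	
			// Treating the line as a key/value pair recovers the counter.
			if f := strings.Fields(line); len(f) == 2 && f[0] == "max" {
				v, _ := strconv.ParseUint(f[1], 10, 64)
				fmt.Println("max events:", v)
			}
		}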
	
	
	==> coredns [89d545dd0db1733ee4daff06cee68794fa2612e7839e9455fde9fb6eabbb7ef2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49729 - 57509 "HINFO IN 9161108537065804054.7799302224143394389. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017461273s
	
	
	==> coredns [bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:50735 - 17360 "HINFO IN 2174463226158819289.5172247921982077030. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017346048s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-382606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-382606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-382606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_34_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:34:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-382606
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:46:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:43:47 +0000   Fri, 19 Dec 2025 03:34:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:43:47 +0000   Fri, 19 Dec 2025 03:34:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:43:47 +0000   Fri, 19 Dec 2025 03:34:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:43:47 +0000   Fri, 19 Dec 2025 03:37:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.129
	  Hostname:    default-k8s-diff-port-382606
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 342506c19e124922943823d9d57eea28
	  System UUID:                342506c1-9e12-4922-9438-23d9d57eea28
	  Boot ID:                    7f2ea5ee-7aae-4716-9364-8ec21adb7cea
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-66bc5c9577-bzq6s                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     12m
	  kube-system                 etcd-default-k8s-diff-port-382606                        100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         12m
	  kube-system                 kube-apiserver-default-k8s-diff-port-382606              250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-382606     200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-vhml9                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-default-k8s-diff-port-382606              100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-746fcd58dc-xphdl                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         11m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        kubernetes-dashboard-api-5444544855-rgb27                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m22s
	  kubernetes-dashboard        kubernetes-dashboard-auth-75d54f6f86-bnd95               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m22s
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-wgdnx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m22s
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-kfx97    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m22s
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-wwbc2                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    9m22s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m30s                  kube-proxy       
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x7 over 12m)      kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientPID
	  Normal   NodeReady                12m                    kubelet          Node default-k8s-diff-port-382606 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                    node-controller  Node default-k8s-diff-port-382606 event: Registered Node default-k8s-diff-port-382606 in Controller
	  Normal   Starting                 9m38s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  9m38s (x8 over 9m38s)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m38s (x8 over 9m38s)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m38s (x7 over 9m38s)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  9m38s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 9m32s                  kubelet          Node default-k8s-diff-port-382606 has been rebooted, boot id: 7f2ea5ee-7aae-4716-9364-8ec21adb7cea
	  Normal   RegisteredNode           9m27s                  node-controller  Node default-k8s-diff-port-382606 event: Registered Node default-k8s-diff-port-382606 in Controller
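	
	The "Allocated resources" totals above follow directly from the pod table: CPU requests sum to 100m (coredns) + 100m (etcd) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) + 100m (metrics-server) + 4 × 100m (the dashboard api, auth, metrics-scraper and web pods) = 1250m, i.e. 62% of the node's 2 CPUs, and memory requests sum to 70Mi + 100Mi + 200Mi + 4 × 200Mi = 1170Mi, about 39% of the 3035912Ki allocatable. The limits column is the same arithmetic over the pods that set limits: 4 × 250m = 1 CPU, and 170Mi (coredns) + 4 × 400Mi = 1770Mi.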
	
	
	==> dmesg <==
	[Dec19 03:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001605] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008324] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.758703] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.091309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104294] kauditd_printk_skb: 102 callbacks suppressed
	[Dec19 03:37] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000088] kauditd_printk_skb: 128 callbacks suppressed
	[  +3.473655] kauditd_printk_skb: 338 callbacks suppressed
	[  +6.923891] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.599835] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.729489] kauditd_printk_skb: 12 callbacks suppressed
	[ +13.266028] kauditd_printk_skb: 42 callbacks suppressed
	
	
	==> etcd [26b15c351a7f5021930cc9f51f1954766cef758efb477c6f239d48814b55ad89] <==
	{"level":"warn","ts":"2025-12-19T03:36:58.651804Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54542","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.668111Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.692914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.714758Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.739271Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.761192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.782198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54648","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.818656Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.833454Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.850275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.868907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.877249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.896080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.973742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.352322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.382031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.399357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.452062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.493578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.521013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.537079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.587673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.610112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.632754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.660242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45266","server-name":"","error":"EOF"}
	
	
	==> etcd [d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5] <==
	{"level":"warn","ts":"2025-12-19T03:34:04.298617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.314545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.327667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.343419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.367627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.383689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.391821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.405054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.426047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.437252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.447053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.458037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.473493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.479914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.492986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.504629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.516820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.526538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.546193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.563536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.574441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.586915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.598212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.679106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59886","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:34:17.949387Z","caller":"traceutil/trace.go:172","msg":"trace[941942705] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"182.138051ms","start":"2025-12-19T03:34:17.767230Z","end":"2025-12-19T03:34:17.949368Z","steps":["trace[941942705] 'process raft request'  (duration: 178.491344ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:46:32 up 9 min,  0 users,  load average: 0.12, 0.28, 0.19
	Linux default-k8s-diff-port-382606 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6] <==
	I1219 03:34:07.918995       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:34:07.949209       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:34:12.651281       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:34:13.006263       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:34:13.015238       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:34:13.309617       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:35:06.602579       1 conn.go:339] Error on socket receive: read tcp 192.168.72.129:8444->192.168.72.1:48290: use of closed network connection
	I1219 03:35:07.259368       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:35:07.267392       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:35:07.267522       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:35:07.267794       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:35:07.434940       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.109.46.149"}
	W1219 03:35:07.451685       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:35:07.451874       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:35:07.458566       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:35:07.458618       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [518a94577bb7de5f6f62c647a74b67c644a3a4cea449ef09d71d4ad5df9ad912] <==
	E1219 03:42:00.965662       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:42:00.965676       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 03:42:00.965885       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:42:00.967113       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:43:00.966983       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:43:00.967104       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:43:00.967183       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:43:00.967471       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:43:00.968264       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:43:00.968322       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:45:00.968411       1 handler_proxy.go:99] no RequestInfo found in the context
	W1219 03:45:00.968438       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:45:00.968574       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	E1219 03:45:00.968591       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:45:00.968597       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:45:00.969771       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
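	
	The recurring 503s for v1beta1.metrics.k8s.io in both kube-apiserver blocks suggest that the aggregated API registered by metrics-server has no healthy backend: the APIService object exists, but the kube-system/metrics-server Service is not answering, so every attempt to download its OpenAPI spec fails. The same root cause shows up in the kube-controller-manager block below as "stale GroupVersion discovery: metrics.k8s.io/v1beta1"; checking the Available condition on that APIService and the readiness of the metrics-server-746fcd58dc-xphdl pod would be the natural next diagnostic step.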
	
	
	==> kube-controller-manager [9caa0d440527b29d084b74cd9fa77197ce53354e034e86874876263937324b73] <==
	I1219 03:40:06.812950       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:40:36.680130       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:40:36.823861       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:41:06.686106       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:41:06.834839       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:41:36.693054       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:41:36.846356       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:42:06.700855       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:42:06.858222       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:42:36.708341       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:42:36.868374       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:43:06.713701       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:43:06.877943       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:43:36.721001       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:43:36.887481       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:44:06.727547       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:44:06.898210       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:44:36.734078       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:44:36.910664       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:45:06.740074       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:45:06.921182       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:45:36.745167       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:45:36.931449       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:46:06.751744       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:46:06.952390       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c] <==
	I1219 03:34:12.353245       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:34:12.353348       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:34:12.357849       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:34:12.363113       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:34:12.368603       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-382606" podCIDRs=["10.244.0.0/24"]
	I1219 03:34:12.371147       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:34:12.395931       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:34:12.396045       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 03:34:12.396077       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:34:12.396605       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1219 03:34:12.397077       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:34:12.397259       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:34:12.397265       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 03:34:12.397627       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:34:12.398009       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:34:12.399266       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:34:12.399529       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:34:12.399544       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 03:34:12.400127       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:34:12.403873       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:34:12.406184       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:34:12.409683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:34:12.409694       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:34:12.409698       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:34:12.416021       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [2b86f1c0414105cede5a3456f97249cb2d21efe909ab37873e3ce4a615c86eab] <==
	I1219 03:37:01.799419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:37:01.900130       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:37:01.900220       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.129"]
	E1219 03:37:01.900457       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:37:01.961090       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:37:01.961150       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:37:01.961192       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:37:01.977893       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:37:01.980700       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:37:01.980716       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:37:01.989122       1 config.go:200] "Starting service config controller"
	I1219 03:37:01.989158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:37:01.989210       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:37:01.989217       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:37:01.989244       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:37:01.989248       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:37:01.990428       1 config.go:309] "Starting node config controller"
	I1219 03:37:01.990459       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:37:01.990465       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:37:02.089889       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:37:02.090284       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:37:02.089890       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3] <==
	I1219 03:34:15.006486       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:34:15.109902       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:34:15.109972       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.129"]
	E1219 03:34:15.110290       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:34:15.302728       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:34:15.302878       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:34:15.302927       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:34:15.312759       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:34:15.313145       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:34:15.313160       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:34:15.318572       1 config.go:200] "Starting service config controller"
	I1219 03:34:15.318597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:34:15.318612       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:34:15.318615       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:34:15.318624       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:34:15.318627       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:34:15.319111       1 config.go:309] "Starting node config controller"
	I1219 03:34:15.319117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:34:15.319127       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:34:15.419138       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:34:15.419225       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:34:15.419689       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [417d2eb47c0a973814eca73db740808aaf83346035e5cd9c14ff6314a66d7849] <==
	I1219 03:36:57.691193       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:36:59.833443       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:36:59.833582       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:36:59.833599       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:36:59.833606       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:36:59.894931       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:36:59.896570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:36:59.908197       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:36:59.913129       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:36:59.913408       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:36:59.915317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 03:36:59.950227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1219 03:37:01.316677       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead] <==
	E1219 03:34:05.426747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:34:05.426838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:34:05.426896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:34:05.426945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:34:05.426985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:34:05.427030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:34:05.427070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:34:05.427118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:34:05.427165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:34:05.427207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:34:05.427251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:34:05.428123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:34:05.428644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:34:05.428974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:34:05.430468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:34:06.239376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:34:06.243449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:34:06.395663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:34:06.442628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:34:06.546385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:34:06.555806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:34:06.657760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:34:06.673910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:34:06.677531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1219 03:34:09.096650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:41:45 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:41:45.766089    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:41:59 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:41:59.766168    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:42:13 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:42:13.765903    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:42:27 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:42:27.766670    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:42:42 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:42:42.767265    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:42:55 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:42:55.766194    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:43:10 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:43:10.775093    1087 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:43:10 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:43:10.775161    1087 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:43:10 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:43:10.775254    1087 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-xphdl_kube-system(fb637b66-cb31-46cc-b490-110c2825cacc): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 19 03:43:10 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:43:10.775290    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:43:24 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:43:24.766435    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:43:36 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:43:36.767365    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:43:48 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:43:48.767139    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:43:59 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:43:59.765852    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:44:11 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:44:11.765932    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:44:26 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:44:26.765801    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:44:41 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:44:41.765220    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:44:53 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:44:53.765338    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:45:06 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:45:06.765850    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:45:20 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:45:20.766327    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:45:35 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:45:35.766176    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:45:48 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:45:48.768438    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:46:01 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:46:01.766330    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:46:13 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:46:13.766814    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:46:26 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:46:26.767736    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	
	
	==> kubernetes-dashboard [0d9d949e94e6f380d0ff910f058da5d15e70ebec0bbbf77042abc4fc76dd78d4] <==
	I1219 03:37:28.427541       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:37:28.427793       1 init.go:48] Using in-cluster config
	I1219 03:37:28.428240       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [4b740253b2e42187c5c5011e1460253c9b2cd55a5b3261393ba8f6ef0a57337c] <==
	10.244.0.1 - - [19/Dec/2025:03:43:51 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:44:00 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:10 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:20 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:21 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:44:30 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:40 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:50 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:44:51 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:45:00 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:10 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:20 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:45:30 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:40 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:50 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:45:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:46:00 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:46:10 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:46:20 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:46:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:46:30 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	E1219 03:44:18.357318       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:45:18.354158       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:46:18.352696       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [503098129741b3077535c690ccb45bf024c8d611f90bec4ecbbe47b18c85deb3] <==
	I1219 03:37:21.811935       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:37:21.812047       1 init.go:49] Using in-cluster config
	I1219 03:37:21.812747       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:37:21.812778       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:37:21.812784       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:37:21.812788       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:37:21.881419       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:37:21.881477       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:37:21.890563       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 03:37:21.896833       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [a0b5497708d9ced75d92291f6ac713d36cce94b49103ad89b2dc84b7aa7aa541] <==
	I1219 03:37:15.110759       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:37:15.111110       1 init.go:49] Using in-cluster config
	I1219 03:37:15.111337       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [8858312cf5133d1dadaf156295497a020ac7e467c5a2f7f19132111df9e8becd] <==
	W1219 03:46:08.188311       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:10.192898       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:10.199791       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:12.205672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:12.212816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:14.218308       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:14.225432       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:16.230071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:16.239214       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:18.243804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:18.251826       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:20.254821       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:20.261044       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:22.266775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:22.272756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:24.276356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:24.284725       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:26.288480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:26.294626       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:28.298813       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:28.306395       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:30.311157       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:30.321312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:32.325613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:46:32.334840       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ced946dadbf7a9872e9726febd61276e7a03119f9bb6394671740bb262877814] <==
	I1219 03:37:01.658661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:37:31.667875       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
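The kubelet log above shows metrics-server stuck pulling from the intentionally unresolvable fake.domain registry, so the post-mortem helper that runs next lists every pod not in the Running phase. A minimal sketch of that same field-selector query, assuming only the Go standard library and kubectl on PATH; the function name and the hard-coded context are illustrative, not the test's actual helpers:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listNonRunningPods mirrors the post-mortem query below: pods in any
// namespace whose status.phase is not Running (illustrative helper).
func listNonRunningPods(kubeContext string) ([]string, error) {
	out, err := exec.Command("kubectl",
		"--context", kubeContext,
		"get", "po", "-A",
		"--field-selector=status.phase!=Running",
		"-o=jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		return nil, fmt.Errorf("kubectl get po: %w", err)
	}
	return strings.Fields(string(out)), nil
}

func main() {
	pods, err := listNonRunningPods("default-k8s-diff-port-382606")
	if err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("non-running pods:", pods)
}

Against the cluster above this would report metrics-server-746fcd58dc-xphdl, matching the helper output that follows.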
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-382606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-xphdl
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-382606 describe pod metrics-server-746fcd58dc-xphdl
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-382606 describe pod metrics-server-746fcd58dc-xphdl: exit status 1 (68.418705ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-xphdl" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-382606 describe pod metrics-server-746fcd58dc-xphdl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (542.66s)
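The 542-second duration corresponds to the 9m0s pod wait timing out: this check and the AddonExistsAfterStop failures below follow the same pattern of polling for pods by label until the context deadline expires ("context deadline exceeded"). A minimal sketch of that polling pattern, assuming only the Go standard library and kubectl on PATH; the function, interval, and hard-coded names are illustrative rather than the test's actual helpers:

package main

import (
	"context"
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForRunningPod polls kubectl until a pod matching the label selector
// reports phase Running, or the context deadline expires (illustrative).
func waitForRunningPod(ctx context.Context, kubeContext, ns, selector string) error {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		out, err := exec.CommandContext(ctx, "kubectl",
			"--context", kubeContext, "-n", ns,
			"get", "po", "-l", selector,
			"-o=jsonpath={.items[*].status.phase}").Output()
		if err == nil && strings.Contains(string(out), "Running") {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("pods %q in %q never reached Running: %w", selector, ns, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute)
	defer cancel()
	if err := waitForRunningPod(ctx, "default-k8s-diff-port-382606",
		"kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		fmt.Println(err)
	}
}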

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.71s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:45:21.169399    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-638861 -n old-k8s-version-638861
start_stop_delete_test.go:285: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:54:20.286686733 +0000 UTC m=+5347.101315551
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-638861 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context old-k8s-version-638861 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (62.768519ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-638861 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
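start_stop_delete_test.go:295 expects the dashboard-metrics-scraper deployment to carry the substituted image registry.k8s.io/echoserver:1.4 (the --images=MetricsScraper=registry.k8s.io/echoserver:1.4 override visible in the Audit table further down), but the deployment was never created, so the describe above returns NotFound and the assertion sees an empty deployment info string. A minimal sketch of an equivalent image check, again assuming only the Go standard library and kubectl on PATH, with illustrative names:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// scraperUsesEchoserver reads the container images of the
// dashboard-metrics-scraper deployment and checks for the echoserver
// override (illustrative stand-in for the addon-image assertion).
func scraperUsesEchoserver(kubeContext string) error {
	out, err := exec.Command("kubectl",
		"--context", kubeContext, "-n", "kubernetes-dashboard",
		"get", "deploy/dashboard-metrics-scraper",
		"-o=jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		return fmt.Errorf("deployment missing or kubectl failed: %w", err)
	}
	if !strings.Contains(string(out), "registry.k8s.io/echoserver:1.4") {
		return fmt.Errorf("unexpected image(s): %s", string(out))
	}
	return nil
}

func main() {
	if err := scraperUsesEchoserver("old-k8s-version-638861"); err != nil {
		fmt.Println("addon image check failed:", err)
	}
}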
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-638861 -n old-k8s-version-638861
helpers_test.go:253: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-638861 logs -n 25
E1219 03:54:20.792251    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-638861 logs -n 25: (1.71502829s)
helpers_test.go:261: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                       ARGS                                                                                                                        │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-694633 sudo cat /etc/containerd/config.toml                                                                                                                                                                                             │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo containerd config dump                                                                                                                                                                                                      │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │                     │
	│ ssh     │ -p bridge-694633 sudo systemctl cat crio --no-pager                                                                                                                                                                                               │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                     │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ ssh     │ -p bridge-694633 sudo crio config                                                                                                                                                                                                                 │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p bridge-694633                                                                                                                                                                                                                                  │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p disable-driver-mounts-477416                                                                                                                                                                                                                   │ disable-driver-mounts-477416 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-638861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ stop    │ -p old-k8s-version-638861 --alsologtostderr -v=3                                                                                                                                                                                                  │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-728806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p no-preload-728806 --alsologtostderr -v=3                                                                                                                                                                                                       │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-832734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                          │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p embed-certs-832734 --alsologtostderr -v=3                                                                                                                                                                                                      │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ stop    │ -p default-k8s-diff-port-382606 --alsologtostderr -v=3                                                                                                                                                                                            │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-638861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0 │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                  │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                     │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                        │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                      │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:37 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:36:29
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:36:29.621083   51711 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:36:29.621200   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621205   51711 out.go:374] Setting ErrFile to fd 2...
	I1219 03:36:29.621212   51711 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:36:29.621491   51711 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:36:29.622131   51711 out.go:368] Setting JSON to false
	I1219 03:36:29.623408   51711 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4729,"bootTime":1766110661,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:36:29.623486   51711 start.go:143] virtualization: kvm guest
	I1219 03:36:29.625670   51711 out.go:179] * [default-k8s-diff-port-382606] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:36:29.633365   51711 notify.go:221] Checking for updates...
	I1219 03:36:29.633417   51711 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:36:29.635075   51711 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:36:29.636942   51711 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:29.638374   51711 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:36:29.639842   51711 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:36:29.641026   51711 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:36:29.642747   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:29.643478   51711 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:36:29.700163   51711 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 03:36:29.701162   51711 start.go:309] selected driver: kvm2
	I1219 03:36:29.701180   51711 start.go:928] validating driver "kvm2" against &{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.701323   51711 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:36:29.702837   51711 start_flags.go:993] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:29.702885   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:29.702957   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:29.703020   51711 start.go:353] cluster config:
	{Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:29.703150   51711 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:36:29.704494   51711 out.go:179] * Starting "default-k8s-diff-port-382606" primary control-plane node in "default-k8s-diff-port-382606" cluster
	I1219 03:36:29.705691   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:29.705751   51711 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 03:36:29.705771   51711 cache.go:65] Caching tarball of preloaded images
	I1219 03:36:29.705892   51711 preload.go:238] Found /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:36:29.705927   51711 cache.go:68] Finished verifying existence of preloaded tar for v1.34.3 on containerd
	I1219 03:36:29.706078   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:29.706318   51711 start.go:360] acquireMachinesLock for default-k8s-diff-port-382606: {Name:mkbf0ff4f4743f75373609a52c13bcf346114394 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:36:29.706374   51711 start.go:364] duration metric: took 32.309µs to acquireMachinesLock for "default-k8s-diff-port-382606"
	I1219 03:36:29.706388   51711 start.go:96] Skipping create...Using existing machine configuration
	I1219 03:36:29.706395   51711 fix.go:54] fixHost starting: 
	I1219 03:36:29.708913   51711 fix.go:112] recreateIfNeeded on default-k8s-diff-port-382606: state=Stopped err=<nil>
	W1219 03:36:29.708943   51711 fix.go:138] unexpected machine state, will restart: <nil>
	I1219 03:36:27.974088   51386 addons.go:239] Setting addon default-storageclass=true in "embed-certs-832734"
	W1219 03:36:27.974109   51386 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:36:27.974136   51386 host.go:66] Checking if "embed-certs-832734" exists ...
	I1219 03:36:27.974565   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:36:27.974582   51386 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:36:27.974599   51386 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:27.974608   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:36:27.976663   51386 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:27.976691   51386 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:36:27.976771   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.977846   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.977880   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.978136   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.979376   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979747   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.979820   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.979860   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980122   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.980448   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.980482   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.980686   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:27.981056   51386 main.go:144] libmachine: domain embed-certs-832734 has defined MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981521   51386 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:50:d6:26", ip: ""} in network mk-embed-certs-832734: {Iface:virbr5 ExpiryTime:2025-12-19 04:36:09 +0000 UTC Type:0 Mac:52:54:00:50:d6:26 Iaid: IPaddr:192.168.83.196 Prefix:24 Hostname:embed-certs-832734 Clientid:01:52:54:00:50:d6:26}
	I1219 03:36:27.981545   51386 main.go:144] libmachine: domain embed-certs-832734 has defined IP address 192.168.83.196 and MAC address 52:54:00:50:d6:26 in network mk-embed-certs-832734
	I1219 03:36:27.981792   51386 sshutil.go:53] new ssh client: &{IP:192.168.83.196 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/embed-certs-832734/id_rsa Username:docker}
	I1219 03:36:28.331935   51386 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:28.393904   51386 node_ready.go:35] waiting up to 6m0s for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398272   51386 node_ready.go:49] node "embed-certs-832734" is "Ready"
	I1219 03:36:28.398297   51386 node_ready.go:38] duration metric: took 4.336343ms for node "embed-certs-832734" to be "Ready" ...
	I1219 03:36:28.398310   51386 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:28.398457   51386 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:28.475709   51386 api_server.go:72] duration metric: took 507.310055ms to wait for apiserver process to appear ...
	I1219 03:36:28.475751   51386 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:28.475776   51386 api_server.go:253] Checking apiserver healthz at https://192.168.83.196:8443/healthz ...
	I1219 03:36:28.483874   51386 api_server.go:279] https://192.168.83.196:8443/healthz returned 200:
	ok
	I1219 03:36:28.485710   51386 api_server.go:141] control plane version: v1.34.3
	I1219 03:36:28.485738   51386 api_server.go:131] duration metric: took 9.978141ms to wait for apiserver health ...
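The healthz wait above is a plain HTTPS GET against the apiserver's /healthz endpoint that must come back 200 with a body of "ok" before the control-plane version is read. A minimal stand-alone sketch of that probe (the endpoint is the one from this log and purely illustrative; TLS verification is skipped here because the apiserver presents a cluster-internal CA):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Accept the apiserver's cluster-internal certificate for this probe only.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	// Endpoint taken from the log above; substitute your node IP and port.
	resp, err := client.Get("https://192.168.83.196:8443/healthz")
	if err != nil {
		fmt.Println("healthz error:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// Healthy means HTTP 200 and a body of "ok".
	fmt.Printf("healthz: %d %s\n", resp.StatusCode, body)
}
```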
	I1219 03:36:28.485751   51386 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:36:28.493956   51386 system_pods.go:59] 8 kube-system pods found
	I1219 03:36:28.493996   51386 system_pods.go:61] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.494024   51386 system_pods.go:61] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.494037   51386 system_pods.go:61] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.494044   51386 system_pods.go:61] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.494052   51386 system_pods.go:61] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.494058   51386 system_pods.go:61] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.494064   51386 system_pods.go:61] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.494074   51386 system_pods.go:61] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.494080   51386 system_pods.go:74] duration metric: took 8.32329ms to wait for pod list to return data ...
	I1219 03:36:28.494090   51386 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:36:28.500269   51386 default_sa.go:45] found service account: "default"
	I1219 03:36:28.500298   51386 default_sa.go:55] duration metric: took 6.200379ms for default service account to be created ...
	I1219 03:36:28.500309   51386 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:36:28.601843   51386 system_pods.go:86] 8 kube-system pods found
	I1219 03:36:28.601871   51386 system_pods.go:89] "coredns-66bc5c9577-4csbt" [742c8d21-619e-4ced-af0f-72f096b866e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:36:28.601880   51386 system_pods.go:89] "etcd-embed-certs-832734" [34433760-fa3f-4045-95f0-62a4ae5f69ce] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:36:28.601887   51386 system_pods.go:89] "kube-apiserver-embed-certs-832734" [b72cf539-ac30-42a7-82fa-df6084947f04] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:36:28.601892   51386 system_pods.go:89] "kube-controller-manager-embed-certs-832734" [0f7ee37d-bdca-43c8-8c15-78faaa4143fb] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:36:28.601896   51386 system_pods.go:89] "kube-proxy-j49gn" [dba889cc-f53c-47fe-ae78-cb48e17b1acb] Running
	I1219 03:36:28.601902   51386 system_pods.go:89] "kube-scheduler-embed-certs-832734" [39077f2a-3026-4449-97fe-85a7bf029b42] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:36:28.601921   51386 system_pods.go:89] "metrics-server-746fcd58dc-kcjq7" [3df93f50-47ae-4697-9567-9a02426c3a6c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:36:28.601930   51386 system_pods.go:89] "storage-provisioner" [efd499d3-fc07-4168-a175-0bee365b79f1] Running
	I1219 03:36:28.601938   51386 system_pods.go:126] duration metric: took 101.621956ms to wait for k8s-apps to be running ...
	I1219 03:36:28.601947   51386 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:36:28.602031   51386 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:36:28.618616   51386 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:36:28.685146   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:36:28.685175   51386 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:36:28.694410   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:36:28.696954   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:36:28.726390   51386 system_svc.go:56] duration metric: took 124.434217ms WaitForService to wait for kubelet
	I1219 03:36:28.726426   51386 kubeadm.go:587] duration metric: took 758.032732ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:36:28.726450   51386 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:36:28.726520   51386 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:36:28.739364   51386 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:36:28.739393   51386 node_conditions.go:123] node cpu capacity is 2
	I1219 03:36:28.739407   51386 node_conditions.go:105] duration metric: took 12.951551ms to run NodePressure ...
	I1219 03:36:28.739421   51386 start.go:242] waiting for startup goroutines ...
	I1219 03:36:28.774949   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:36:28.774981   51386 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:36:28.896758   51386 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:28.896785   51386 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:36:29.110522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:36:31.016418   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.319423876s)
	I1219 03:36:31.016497   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.322025841s)
	I1219 03:36:31.016534   51386 ssh_runner.go:235] Completed: test -f /usr/local/bin/helm: (2.28998192s)
	I1219 03:36:31.016597   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.906047637s)
	I1219 03:36:31.016610   51386 addons.go:500] Verifying addon metrics-server=true in "embed-certs-832734"
	I1219 03:36:31.016613   51386 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	I1219 03:36:29.711054   51711 out.go:252] * Restarting existing kvm2 VM for "default-k8s-diff-port-382606" ...
	I1219 03:36:29.711101   51711 main.go:144] libmachine: starting domain...
	I1219 03:36:29.711116   51711 main.go:144] libmachine: ensuring networks are active...
	I1219 03:36:29.712088   51711 main.go:144] libmachine: Ensuring network default is active
	I1219 03:36:29.712549   51711 main.go:144] libmachine: Ensuring network mk-default-k8s-diff-port-382606 is active
	I1219 03:36:29.713312   51711 main.go:144] libmachine: getting domain XML...
	I1219 03:36:29.714943   51711 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>default-k8s-diff-port-382606</name>
	  <uuid>342506c1-9e12-4922-9438-23d9d57eea28</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/default-k8s-diff-port-382606.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:fb:a4:4e'/>
	      <source network='mk-default-k8s-diff-port-382606'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:57:4f:b6'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
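The MAC addresses and network names in this domain XML are what the later DHCP-lease lookups key on. A short sketch that extracts them from a saved copy of the XML (for example the output of "virsh dumpxml default-k8s-diff-port-382606"; the local file name is an assumption for illustration) using Go's encoding/xml:

```go
package main

import (
	"encoding/xml"
	"fmt"
	"os"
)

// Only the fields inspected here are modelled; element names follow the
// libvirt domain XML printed above.
type domain struct {
	Name       string     `xml:"name"`
	UUID       string     `xml:"uuid"`
	Interfaces []netIface `xml:"devices>interface"`
}

type netIface struct {
	MAC struct {
		Address string `xml:"address,attr"`
	} `xml:"mac"`
	Source struct {
		Network string `xml:"network,attr"`
	} `xml:"source"`
}

func main() {
	// Illustrative file name, e.g. saved from `virsh dumpxml <domain>`.
	raw, err := os.ReadFile("domain.xml")
	if err != nil {
		panic(err)
	}
	var d domain
	if err := xml.Unmarshal(raw, &d); err != nil {
		panic(err)
	}
	fmt.Println("domain:", d.Name, "uuid:", d.UUID)
	for _, iface := range d.Interfaces {
		fmt.Printf("  mac %s on network %s\n", iface.MAC.Address, iface.Source.Network)
	}
}
```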
	
	I1219 03:36:31.342655   51711 main.go:144] libmachine: waiting for domain to start...
	I1219 03:36:31.345734   51711 main.go:144] libmachine: domain is now running
	I1219 03:36:31.345778   51711 main.go:144] libmachine: waiting for IP...
	I1219 03:36:31.347227   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348141   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has current primary IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.348163   51711 main.go:144] libmachine: found domain IP: 192.168.72.129
	I1219 03:36:31.348170   51711 main.go:144] libmachine: reserving static IP address...
	I1219 03:36:31.348677   51711 main.go:144] libmachine: found host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.348704   51711 main.go:144] libmachine: skip adding static IP to network mk-default-k8s-diff-port-382606 - found existing host DHCP lease matching {name: "default-k8s-diff-port-382606", mac: "52:54:00:fb:a4:4e", ip: "192.168.72.129"}
	I1219 03:36:31.348713   51711 main.go:144] libmachine: reserved static IP address 192.168.72.129 for domain default-k8s-diff-port-382606
	I1219 03:36:31.348731   51711 main.go:144] libmachine: waiting for SSH...
	I1219 03:36:31.348741   51711 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:36:31.351582   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352122   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:33:41 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:31.352155   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:31.352422   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:31.352772   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:31.352782   51711 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:36:34.417281   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
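The "waiting for SSH" phase is a dial-and-retry loop, so the "no route to host" and "connection refused" lines in this log are expected until the guest's sshd comes up. A minimal sketch of the same pattern (address, interval and timeout are illustrative, not minikube's actual values):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH retries TCP dials until port 22 on addr accepts a connection
// or the overall deadline passes.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not reachable on %s: %w", addr, err)
		}
		time.Sleep(3 * time.Second)
	}
}

func main() {
	// Address and timeout are illustrative.
	if err := waitForSSH("192.168.72.129:22", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("port 22 is reachable")
}
```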
	I1219 03:36:31.980522   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	I1219 03:36:35.707529   51386 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.726958549s)
	I1219 03:36:35.707614   51386 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:36:36.641432   51386 addons.go:500] Verifying addon dashboard=true in "embed-certs-832734"
	I1219 03:36:36.645285   51386 out.go:179] * Verifying dashboard addon...
	I1219 03:36:36.647847   51386 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:36:36.659465   51386 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:36:36.659491   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.154819   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:37.652042   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.152461   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:38.651730   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.152475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:39.652155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.153311   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.652427   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:41.151837   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:40.497282   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: no route to host
	I1219 03:36:43.498703   51711 main.go:144] libmachine: Error dialing TCP: dial tcp 192.168.72.129:22: connect: connection refused
	I1219 03:36:41.654155   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.154727   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:42.653186   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.152647   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:43.651177   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.154241   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:44.651752   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:45.152244   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.124796   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.151832   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:46.628602   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:36:46.632304   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.632730   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.632753   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.633056   51711 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/config.json ...
	I1219 03:36:46.633240   51711 machine.go:94] provisionDockerMachine start ...
	I1219 03:36:46.635441   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.635889   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.635934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.636109   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.636298   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.636308   51711 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:36:46.752911   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:36:46.752937   51711 buildroot.go:166] provisioning hostname "default-k8s-diff-port-382606"
	I1219 03:36:46.756912   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757425   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.757463   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.757703   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.757935   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.757955   51711 main.go:144] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-382606 && echo "default-k8s-diff-port-382606" | sudo tee /etc/hostname
	I1219 03:36:46.902266   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-382606
	
	I1219 03:36:46.905791   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906293   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:46.906323   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:46.906555   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:46.906758   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:46.906774   51711 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-382606' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-382606/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-382606' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:36:47.045442   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:36:47.045472   51711 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5003/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5003/.minikube}
	I1219 03:36:47.045496   51711 buildroot.go:174] setting up certificates
	I1219 03:36:47.045505   51711 provision.go:84] configureAuth start
	I1219 03:36:47.049643   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.050087   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.050115   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.052980   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053377   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.053417   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.053596   51711 provision.go:143] copyHostCerts
	I1219 03:36:47.053653   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem, removing ...
	I1219 03:36:47.053678   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem
	I1219 03:36:47.053772   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem (1082 bytes)
	I1219 03:36:47.053902   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem, removing ...
	I1219 03:36:47.053919   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem
	I1219 03:36:47.053949   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem (1123 bytes)
	I1219 03:36:47.054027   51711 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem, removing ...
	I1219 03:36:47.054036   51711 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem
	I1219 03:36:47.054059   51711 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem (1675 bytes)
	I1219 03:36:47.054113   51711 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-382606 san=[127.0.0.1 192.168.72.129 default-k8s-diff-port-382606 localhost minikube]
	I1219 03:36:47.093786   51711 provision.go:177] copyRemoteCerts
	I1219 03:36:47.093848   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:36:47.096938   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097402   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.097443   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.097608   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.187589   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:36:47.229519   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1219 03:36:47.264503   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1219 03:36:47.294746   51711 provision.go:87] duration metric: took 249.22829ms to configureAuth
	I1219 03:36:47.294772   51711 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:36:47.294974   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:36:47.294990   51711 machine.go:97] duration metric: took 661.738495ms to provisionDockerMachine
	I1219 03:36:47.295000   51711 start.go:293] postStartSetup for "default-k8s-diff-port-382606" (driver="kvm2")
	I1219 03:36:47.295020   51711 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:36:47.295079   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:36:47.297915   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298388   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.298414   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.298592   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.391351   51711 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:36:47.396636   51711 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:36:47.396664   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/addons for local assets ...
	I1219 03:36:47.396734   51711 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/files for local assets ...
	I1219 03:36:47.396833   51711 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem -> 89782.pem in /etc/ssl/certs
	I1219 03:36:47.396981   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:36:47.414891   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:47.450785   51711 start.go:296] duration metric: took 155.770681ms for postStartSetup
	I1219 03:36:47.450829   51711 fix.go:56] duration metric: took 17.744433576s for fixHost
	I1219 03:36:47.453927   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454408   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.454438   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.454581   51711 main.go:144] libmachine: Using SSH client type: native
	I1219 03:36:47.454774   51711 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.72.129 22 <nil> <nil>}
	I1219 03:36:47.454784   51711 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:36:47.578960   51711 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766115407.541226750
	
	I1219 03:36:47.578984   51711 fix.go:216] guest clock: 1766115407.541226750
	I1219 03:36:47.578993   51711 fix.go:229] Guest: 2025-12-19 03:36:47.54122675 +0000 UTC Remote: 2025-12-19 03:36:47.450834556 +0000 UTC m=+17.907032910 (delta=90.392194ms)
	I1219 03:36:47.579033   51711 fix.go:200] guest clock delta is within tolerance: 90.392194ms
	I1219 03:36:47.579039   51711 start.go:83] releasing machines lock for "default-k8s-diff-port-382606", held for 17.872657006s
	I1219 03:36:47.582214   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.582699   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.582737   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.583361   51711 ssh_runner.go:195] Run: cat /version.json
	I1219 03:36:47.583439   51711 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:36:47.586735   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.586965   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587209   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587236   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587400   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.587637   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:47.587663   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:47.587852   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:36:47.701374   51711 ssh_runner.go:195] Run: systemctl --version
	I1219 03:36:47.707956   51711 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:36:47.714921   51711 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:36:47.714993   51711 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:36:47.736464   51711 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:36:47.736487   51711 start.go:496] detecting cgroup driver to use...
	I1219 03:36:47.736550   51711 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:36:47.771913   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:36:47.789225   51711 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:36:47.789292   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:36:47.814503   51711 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:36:47.832961   51711 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:36:48.004075   51711 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:36:48.227207   51711 docker.go:234] disabling docker service ...
	I1219 03:36:48.227297   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:36:48.245923   51711 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:36:48.261992   51711 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:36:48.443743   51711 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:36:48.627983   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:36:48.647391   51711 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:36:48.673139   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:36:48.690643   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:36:48.703896   51711 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 03:36:48.703949   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 03:36:48.718567   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.732932   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:36:48.749170   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:36:48.772676   51711 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:36:48.787125   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:36:48.800190   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:36:48.812900   51711 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:36:48.826147   51711 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:36:48.841046   51711 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:36:48.841107   51711 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:36:48.867440   51711 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:36:48.879351   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:49.048166   51711 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:36:49.092003   51711 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:36:49.092122   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:36:49.098374   51711 retry.go:31] will retry after 1.402478088s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
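The 60s wait for /run/containerd/containerd.sock after "systemctl restart containerd" is the same stat-and-retry idea. A stand-alone sketch (poll interval and timeout are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists or the timeout elapses.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for %s", path)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// Path from the log above; the 60s budget matches "Will wait 60s for socket path".
	if err := waitForSocket("/run/containerd/containerd.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("containerd socket is ready")
}
```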
	I1219 03:36:50.501086   51711 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:36:50.509026   51711 start.go:564] Will wait 60s for crictl version
	I1219 03:36:50.509089   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:50.514426   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:36:50.554888   51711 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:36:50.554956   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.583326   51711 ssh_runner.go:195] Run: containerd --version
	I1219 03:36:50.611254   51711 out.go:179] * Preparing Kubernetes v1.34.3 on containerd 2.2.0 ...
	I1219 03:36:46.651075   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.206126   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:47.654221   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.152458   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:48.651475   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.152863   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:49.655859   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.152073   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:50.655613   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.153352   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:51.653895   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.151537   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:52.653336   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.156131   51386 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:36:53.652752   51386 kapi.go:107] duration metric: took 17.00490252s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:36:53.654689   51386 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p embed-certs-832734 addons enable metrics-server
	
	I1219 03:36:53.656077   51386 out.go:179] * Enabled addons: storage-provisioner, metrics-server, default-storageclass, dashboard
	I1219 03:36:50.615098   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615498   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:36:50.615532   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:36:50.615798   51711 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I1219 03:36:50.620834   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:36:50.637469   51711 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:36:50.637614   51711 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 03:36:50.637684   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.668556   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.668578   51711 containerd.go:534] Images already preloaded, skipping extraction
	I1219 03:36:50.668632   51711 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:36:50.703466   51711 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:36:50.703488   51711 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:36:50.703495   51711 kubeadm.go:935] updating node { 192.168.72.129 8444 v1.34.3 containerd true true} ...
	I1219 03:36:50.703585   51711 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-382606 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.129
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:36:50.703648   51711 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:36:50.734238   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:36:50.734260   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:36:50.734277   51711 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1219 03:36:50.734306   51711 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.129 APIServerPort:8444 KubernetesVersion:v1.34.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-382606 NodeName:default-k8s-diff-port-382606 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.129"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.129 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:36:50.734471   51711 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.129
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "default-k8s-diff-port-382606"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.72.129"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.129"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
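For reference, the kubeadm config printed above is written to disk as a single multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal Go sketch for sanity-checking such a file before the `kubeadm init phase ...` calls later in the log; it assumes the gopkg.in/yaml.v3 package, which is not part of the log above, and uses the same path the log does.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3" // assumed YAML library, not taken from the log
)

// Print the apiVersion/kind of every document in the generated kubeadm config.
func main() {
	f, err := os.Open("/var/tmp/minikube/kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc struct {
			APIVersion string `yaml:"apiVersion"`
			Kind       string `yaml:"kind"`
		}
		if err := dec.Decode(&doc); err != nil {
			break // io.EOF once all four documents are consumed
		}
		fmt.Printf("%s %s\n", doc.APIVersion, doc.Kind)
	}
}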
	
	I1219 03:36:50.734558   51711 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.3
	I1219 03:36:50.746945   51711 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:36:50.746995   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:36:50.758948   51711 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (334 bytes)
	I1219 03:36:50.782923   51711 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1219 03:36:50.807164   51711 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2247 bytes)
	I1219 03:36:50.829562   51711 ssh_runner.go:195] Run: grep 192.168.72.129	control-plane.minikube.internal$ /etc/hosts
	I1219 03:36:50.833888   51711 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.129	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
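The bash one-liner above pins control-plane.minikube.internal in /etc/hosts by filtering out any stale entry and appending the current address. A rough Go sketch of the same rewrite, assuming root access; the IP and hostname are taken from the log.

package main

import (
	"fmt"
	"os"
	"strings"
)

// Rewrite /etc/hosts so it contains exactly one entry for
// control-plane.minikube.internal, mirroring the grep -v / echo pipeline above.
func main() {
	const entry = "192.168.72.129\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // drop the stale entry, same filter as grep -v above
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
	fmt.Println("updated /etc/hosts")
}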
	I1219 03:36:50.849703   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:36:51.014216   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:36:51.062118   51711 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606 for IP: 192.168.72.129
	I1219 03:36:51.062147   51711 certs.go:195] generating shared ca certs ...
	I1219 03:36:51.062168   51711 certs.go:227] acquiring lock for ca certs: {Name:mk6db7e23547b9013e447eaa0ddba18e05213211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.062409   51711 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key
	I1219 03:36:51.062517   51711 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key
	I1219 03:36:51.062542   51711 certs.go:257] generating profile certs ...
	I1219 03:36:51.062681   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/client.key
	I1219 03:36:51.062791   51711 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key.13c41c2b
	I1219 03:36:51.062855   51711 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key
	I1219 03:36:51.063062   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem (1338 bytes)
	W1219 03:36:51.063113   51711 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978_empty.pem, impossibly tiny 0 bytes
	I1219 03:36:51.063130   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:36:51.063176   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:36:51.063218   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:36:51.063256   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem (1675 bytes)
	I1219 03:36:51.063324   51711 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:36:51.064049   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:36:51.108621   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:36:51.164027   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:36:51.199337   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:36:51.234216   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1219 03:36:51.283158   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1219 03:36:51.314148   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:36:51.344498   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/default-k8s-diff-port-382606/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1219 03:36:51.374002   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:36:51.403858   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem --> /usr/share/ca-certificates/8978.pem (1338 bytes)
	I1219 03:36:51.438346   51711 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /usr/share/ca-certificates/89782.pem (1708 bytes)
	I1219 03:36:51.476174   51711 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:36:51.499199   51711 ssh_runner.go:195] Run: openssl version
	I1219 03:36:51.506702   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.518665   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8978.pem /etc/ssl/certs/8978.pem
	I1219 03:36:51.530739   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536107   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:37 /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.536167   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8978.pem
	I1219 03:36:51.543417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:36:51.554750   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8978.pem /etc/ssl/certs/51391683.0
	I1219 03:36:51.566106   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.577342   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89782.pem /etc/ssl/certs/89782.pem
	I1219 03:36:51.588583   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594342   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:37 /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.594386   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89782.pem
	I1219 03:36:51.602417   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.614493   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89782.pem /etc/ssl/certs/3ec20f2e.0
	I1219 03:36:51.626108   51711 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.638273   51711 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:36:51.650073   51711 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655546   51711 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.655600   51711 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:36:51.662728   51711 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:36:51.675457   51711 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
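The ls/openssl/ln sequence above installs each CA into the system trust store: the certificate's OpenSSL subject hash becomes the <hash>.0 symlink name under /etc/ssl/certs (51391683.0, 3ec20f2e.0, b5213941.0 in this run). A minimal Go sketch of that step, shelling out to the openssl CLI the same way the log does; the path is illustrative and the program needs root to write under /etc/ssl/certs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a PEM certificate and
// symlinks it as <hash>.0 under /etc/ssl/certs, mirroring
// `openssl x509 -hash -noout -in <cert>` followed by `ln -fs`.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // -f behaviour: replace any existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}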
	I1219 03:36:51.687999   51711 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:36:51.693178   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1219 03:36:51.700656   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1219 03:36:51.708623   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1219 03:36:51.715865   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1219 03:36:51.725468   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1219 03:36:51.732847   51711 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
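Each `openssl x509 -checkend 86400` run above exits non-zero if the certificate expires within the next 24 hours, which is what would trigger regeneration. A pure-Go equivalent of that check using crypto/x509; the path is one of the certs probed above.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within the given window, matching `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}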
	I1219 03:36:51.739988   51711 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-382606 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:default-k8s-diff-port-382606 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:36:51.740068   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:36:51.740145   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.779756   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.779780   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.779786   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.779790   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.779794   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.779800   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.779804   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.779808   51711 cri.go:92] found id: ""
	I1219 03:36:51.779864   51711 ssh_runner.go:195] Run: sudo runc --root /run/containerd/runc/k8s.io list -f json
	W1219 03:36:51.796814   51711 kubeadm.go:408] unpause failed: list paused: runc: sudo runc --root /run/containerd/runc/k8s.io list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:36:51Z" level=error msg="open /run/containerd/runc/k8s.io: no such file or directory"
	I1219 03:36:51.796914   51711 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:36:51.809895   51711 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1219 03:36:51.809912   51711 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1219 03:36:51.809956   51711 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1219 03:36:51.821465   51711 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:36:51.822684   51711 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-382606" does not appear in /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:36:51.823576   51711 kubeconfig.go:62] /home/jenkins/minikube-integration/22230-5003/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-382606" cluster setting kubeconfig missing "default-k8s-diff-port-382606" context setting]
	I1219 03:36:51.824679   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:36:51.826925   51711 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1219 03:36:51.838686   51711 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.72.129
	I1219 03:36:51.838723   51711 kubeadm.go:1161] stopping kube-system containers ...
	I1219 03:36:51.838740   51711 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system]}
	I1219 03:36:51.838793   51711 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:36:51.874959   51711 cri.go:92] found id: "64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372"
	I1219 03:36:51.874981   51711 cri.go:92] found id: "bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe"
	I1219 03:36:51.874995   51711 cri.go:92] found id: "e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3"
	I1219 03:36:51.874998   51711 cri.go:92] found id: "4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6"
	I1219 03:36:51.875001   51711 cri.go:92] found id: "7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead"
	I1219 03:36:51.875004   51711 cri.go:92] found id: "d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5"
	I1219 03:36:51.875019   51711 cri.go:92] found id: "f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c"
	I1219 03:36:51.875022   51711 cri.go:92] found id: ""
	I1219 03:36:51.875027   51711 cri.go:255] Stopping containers: [64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c]
	I1219 03:36:51.875080   51711 ssh_runner.go:195] Run: which crictl
	I1219 03:36:51.879700   51711 ssh_runner.go:195] Run: sudo /usr/bin/crictl stop --timeout=10 64173857245534610ad0b219dde49e4d3132d78dc570c94e3855dbad045be372 bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3 4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6 7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5 f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c
	I1219 03:36:51.939513   51711 ssh_runner.go:195] Run: sudo systemctl stop kubelet
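The restart path above stops every kube-system container before the kubelet: it lists IDs with `crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system`, then stops them with a 10-second timeout. A minimal Go sketch of that flow, using only the crictl flags that appear in the log; run as root on the node.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// List container IDs labelled with the kube-system namespace, then stop them.
func main() {
	out, err := exec.Command("crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		panic(err)
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Println("no kube-system containers found")
		return
	}
	args := append([]string{"stop", "--timeout=10"}, ids...)
	if err := exec.Command("crictl", args...).Run(); err != nil {
		panic(err)
	}
	fmt.Printf("stopped %d containers\n", len(ids))
}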
	I1219 03:36:51.985557   51711 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:36:51.999714   51711 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:36:51.999739   51711 kubeadm.go:158] found existing configuration files:
	
	I1219 03:36:51.999807   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1219 03:36:52.011529   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:36:52.011594   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:36:52.023630   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1219 03:36:52.036507   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:36:52.036566   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:36:52.048019   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.061421   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:36:52.061498   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:36:52.073436   51711 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1219 03:36:52.084186   51711 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:36:52.084244   51711 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:36:52.098426   51711 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:36:52.111056   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:52.261515   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.323343   51711 ssh_runner.go:235] Completed: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (2.061779829s)
	I1219 03:36:54.323428   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.593075   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:53.657242   51386 addons.go:546] duration metric: took 25.688774629s for enable addons: enabled=[storage-provisioner metrics-server default-storageclass dashboard]
	I1219 03:36:53.657289   51386 start.go:247] waiting for cluster config update ...
	I1219 03:36:53.657306   51386 start.go:256] writing updated cluster config ...
	I1219 03:36:53.657575   51386 ssh_runner.go:195] Run: rm -f paused
	I1219 03:36:53.663463   51386 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:53.667135   51386 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.672738   51386 pod_ready.go:94] pod "coredns-66bc5c9577-4csbt" is "Ready"
	I1219 03:36:53.672765   51386 pod_ready.go:86] duration metric: took 5.607283ms for pod "coredns-66bc5c9577-4csbt" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.675345   51386 pod_ready.go:83] waiting for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.679709   51386 pod_ready.go:94] pod "etcd-embed-certs-832734" is "Ready"
	I1219 03:36:53.679732   51386 pod_ready.go:86] duration metric: took 4.36675ms for pod "etcd-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.681513   51386 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.685784   51386 pod_ready.go:94] pod "kube-apiserver-embed-certs-832734" is "Ready"
	I1219 03:36:53.685803   51386 pod_ready.go:86] duration metric: took 4.273628ms for pod "kube-apiserver-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:53.688112   51386 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.068844   51386 pod_ready.go:94] pod "kube-controller-manager-embed-certs-832734" is "Ready"
	I1219 03:36:54.068878   51386 pod_ready.go:86] duration metric: took 380.74628ms for pod "kube-controller-manager-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.268799   51386 pod_ready.go:83] waiting for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.668935   51386 pod_ready.go:94] pod "kube-proxy-j49gn" is "Ready"
	I1219 03:36:54.668971   51386 pod_ready.go:86] duration metric: took 400.137967ms for pod "kube-proxy-j49gn" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:54.868862   51386 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269481   51386 pod_ready.go:94] pod "kube-scheduler-embed-certs-832734" is "Ready"
	I1219 03:36:55.269512   51386 pod_ready.go:86] duration metric: took 400.62266ms for pod "kube-scheduler-embed-certs-832734" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:36:55.269530   51386 pod_ready.go:40] duration metric: took 1.60604049s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:36:55.329865   51386 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:36:55.331217   51386 out.go:179] * Done! kubectl is now configured to use "embed-certs-832734" cluster and "default" namespace by default
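The pod_ready checks above poll each kube-system pod until its Ready condition is True or the pod is gone. A rough client-go sketch of the same readiness test, assuming the standard k8s.io/client-go packages (not shown in the log) and the kubeconfig path this run uses.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Kubeconfig path taken from the log; adjust for a real environment.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/22230-5003/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	selectors := []string{
		"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
		"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
	}
	deadline := time.Now().Add(4 * time.Minute) // same extra-wait budget as the log
	for _, sel := range selectors {
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{LabelSelector: sel})
			if err == nil && (len(pods.Items) == 0 || podReady(&pods.Items[0])) {
				break // "Ready" or gone, as the log puts it
			}
			if time.Now().After(deadline) {
				fmt.Println("timed out waiting for", sel)
				break
			}
			time.Sleep(2 * time.Second)
		}
	}
	fmt.Println("kube-system pods ready")
}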
	I1219 03:36:54.658040   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:36:54.764830   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:36:54.764901   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.265628   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:55.765546   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.265137   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:36:56.294858   51711 api_server.go:72] duration metric: took 1.53003596s to wait for apiserver process to appear ...
	I1219 03:36:56.294894   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:36:56.294920   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:56.295516   51711 api_server.go:269] stopped: https://192.168.72.129:8444/healthz: Get "https://192.168.72.129:8444/healthz": dial tcp 192.168.72.129:8444: connect: connection refused
	I1219 03:36:56.795253   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.818365   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.818396   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:36:59.818426   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:36:59.867609   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1219 03:36:59.867642   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1219 03:37:00.295133   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.300691   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.300720   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:00.795111   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:00.825034   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:00.825068   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.295554   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.307047   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.307078   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:01.795401   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:01.800055   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:01.800091   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.295888   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.302103   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.302125   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:02.795818   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:02.802296   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1219 03:37:02.802326   51711 api_server.go:103] status: https://192.168.72.129:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[+]poststarthook/rbac/bootstrap-roles ok
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1219 03:37:03.296021   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:03.301661   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:03.310379   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:03.310412   51711 api_server.go:131] duration metric: took 7.01550899s to wait for apiserver health ...
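The healthz loop above tolerates connection refusals while the static apiserver pod comes up, then 403s from the anonymous probe and 500s while post-start hooks (rbac/bootstrap-roles and friends) finish, and only stops on a 200. A stripped-down sketch of that polling loop in Go; the URL is taken from the log and TLS verification is skipped because the probe is anonymous, so the timings and deadline are illustrative.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// Probe the apiserver's /healthz until it returns 200, retrying on
// connection errors and on non-200 status codes.
func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // anonymous probe, self-signed cert
		},
	}
	url := "https://192.168.72.129:8444/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond) // connection refused: apiserver not listening yet
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("apiserver healthy:", string(body))
			return
		}
		// 403 (anonymous forbidden) or 500 (post-start hooks pending): retry
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}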
	I1219 03:37:03.310425   51711 cni.go:84] Creating CNI manager for ""
	I1219 03:37:03.310437   51711 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:37:03.312477   51711 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:37:03.313819   51711 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:37:03.331177   51711 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:37:03.360466   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:03.365800   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:03.365852   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:37:03.365866   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:03.365876   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:03.365889   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1219 03:37:03.365896   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:03.365910   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:03.365918   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:03.365924   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:03.365935   51711 system_pods.go:74] duration metric: took 5.441032ms to wait for pod list to return data ...
	I1219 03:37:03.365944   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:03.369512   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:03.369539   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:03.369553   51711 node_conditions.go:105] duration metric: took 3.601059ms to run NodePressure ...
	I1219 03:37:03.369618   51711 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I1219 03:37:03.647329   51711 kubeadm.go:729] waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651092   51711 kubeadm.go:744] kubelet initialised
	I1219 03:37:03.651116   51711 kubeadm.go:745] duration metric: took 3.75629ms waiting for restarted kubelet to initialise ...
	I1219 03:37:03.651137   51711 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:37:03.667607   51711 ops.go:34] apiserver oom_adj: -16
	I1219 03:37:03.667629   51711 kubeadm.go:602] duration metric: took 11.857709737s to restartPrimaryControlPlane
	I1219 03:37:03.667638   51711 kubeadm.go:403] duration metric: took 11.927656699s to StartCluster
	I1219 03:37:03.667662   51711 settings.go:142] acquiring lock: {Name:mk7f7ba85357bfc9fca2e66b70b16d967ca355d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.667744   51711 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:37:03.669684   51711 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:37:03.669943   51711 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.72.129 Port:8444 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:37:03.670026   51711 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:37:03.670125   51711 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670141   51711 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:37:03.670153   51711 addons.go:70] Setting metrics-server=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670165   51711 addons.go:239] Setting addon metrics-server=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670174   51711 addons.go:248] addon metrics-server should already be in state true
	I1219 03:37:03.670145   51711 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-382606"
	I1219 03:37:03.670175   51711 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670219   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.670222   51711 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-382606"
	I1219 03:37:03.670185   51711 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-382606"
	I1219 03:37:03.670315   51711 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.670328   51711 addons.go:248] addon dashboard should already be in state true
	I1219 03:37:03.670352   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	W1219 03:37:03.670200   51711 addons.go:248] addon storage-provisioner should already be in state true
	I1219 03:37:03.670428   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.671212   51711 out.go:179] * Verifying Kubernetes components...
	I1219 03:37:03.672712   51711 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:37:03.673624   51711 addons.go:436] installing /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:03.673642   51711 ssh_runner.go:362] scp dashboard/dashboard-admin.yaml --> /etc/kubernetes/addons/dashboard-admin.yaml (373 bytes)
	I1219 03:37:03.674241   51711 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I1219 03:37:03.674256   51711 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:37:03.674842   51711 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-382606"
	W1219 03:37:03.674857   51711 addons.go:248] addon default-storageclass should already be in state true
	I1219 03:37:03.674871   51711 host.go:66] Checking if "default-k8s-diff-port-382606" exists ...
	I1219 03:37:03.675431   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1219 03:37:03.675448   51711 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1219 03:37:03.675481   51711 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:03.675502   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:37:03.677064   51711 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:03.677081   51711 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:37:03.677620   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678481   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.678567   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.678872   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.680203   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680419   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.680904   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.680934   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681162   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681407   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681444   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.681467   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681685   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.681950   51711 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:fb:a4:4e", ip: ""} in network mk-default-k8s-diff-port-382606: {Iface:virbr4 ExpiryTime:2025-12-19 04:36:43 +0000 UTC Type:0 Mac:52:54:00:fb:a4:4e Iaid: IPaddr:192.168.72.129 Prefix:24 Hostname:default-k8s-diff-port-382606 Clientid:01:52:54:00:fb:a4:4e}
	I1219 03:37:03.681982   51711 main.go:144] libmachine: domain default-k8s-diff-port-382606 has defined IP address 192.168.72.129 and MAC address 52:54:00:fb:a4:4e in network mk-default-k8s-diff-port-382606
	I1219 03:37:03.682175   51711 sshutil.go:53] new ssh client: &{IP:192.168.72.129 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/default-k8s-diff-port-382606/id_rsa Username:docker}
	I1219 03:37:03.929043   51711 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:37:03.969693   51711 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:04.174684   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:37:04.182529   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:37:04.184635   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1219 03:37:04.184660   51711 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I1219 03:37:04.197532   51711 ssh_runner.go:195] Run: test -f /usr/bin/helm
	I1219 03:37:04.242429   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1219 03:37:04.242455   51711 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1219 03:37:04.309574   51711 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:04.309600   51711 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1219 03:37:04.367754   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1219 03:37:05.660040   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.485300577s)
	I1219 03:37:05.660070   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.477513606s)
	I1219 03:37:05.660116   51711 ssh_runner.go:235] Completed: test -f /usr/bin/helm: (1.462552784s)
	I1219 03:37:05.660185   51711 ssh_runner.go:195] Run: test -f /usr/local/bin/helm
	I1219 03:37:05.673056   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.305263658s)
	I1219 03:37:05.673098   51711 addons.go:500] Verifying addon metrics-server=true in "default-k8s-diff-port-382606"
	I1219 03:37:05.673137   51711 ssh_runner.go:195] Run: sudo bash -c "curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 && chmod 700 get_helm.sh && HELM_INSTALL_DIR=/usr/bin ./get_helm.sh"
	W1219 03:37:05.974619   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:06.630759   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort
	W1219 03:37:08.472974   51711 node_ready.go:57] node "default-k8s-diff-port-382606" has "Ready":"False" status (will retry)
	I1219 03:37:10.195765   51711 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig helm upgrade --install kubernetes-dashboard kubernetes-dashboard --create-namespace --repo https://kubernetes.github.io/dashboard/ --namespace kubernetes-dashboard --set nginx.enabled=false --set cert-manager.enabled=false --set metrics-server.enabled=false --set kong.proxy.type=NodePort: (3.56493028s)
	I1219 03:37:10.195868   51711 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.3/kubectl apply -f /etc/kubernetes/addons/dashboard-admin.yaml
	I1219 03:37:10.536948   51711 node_ready.go:49] node "default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:10.536984   51711 node_ready.go:38] duration metric: took 6.567254454s for node "default-k8s-diff-port-382606" to be "Ready" ...
	I1219 03:37:10.536999   51711 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:37:10.537074   51711 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:37:10.631962   51711 api_server.go:72] duration metric: took 6.961979571s to wait for apiserver process to appear ...
	I1219 03:37:10.631998   51711 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:37:10.632041   51711 api_server.go:253] Checking apiserver healthz at https://192.168.72.129:8444/healthz ...
	I1219 03:37:10.633102   51711 addons.go:500] Verifying addon dashboard=true in "default-k8s-diff-port-382606"
	I1219 03:37:10.637827   51711 out.go:179] * Verifying dashboard addon...
	I1219 03:37:10.641108   51711 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=kubernetes-dashboard-web" in ns "kubernetes-dashboard" ...
	I1219 03:37:10.648897   51711 api_server.go:279] https://192.168.72.129:8444/healthz returned 200:
	ok
	I1219 03:37:10.650072   51711 api_server.go:141] control plane version: v1.34.3
	I1219 03:37:10.650099   51711 api_server.go:131] duration metric: took 18.093601ms to wait for apiserver health ...
	I1219 03:37:10.650110   51711 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:37:10.655610   51711 kapi.go:86] Found 1 Pods for label selector app.kubernetes.io/name=kubernetes-dashboard-web
	I1219 03:37:10.655627   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:10.657971   51711 system_pods.go:59] 8 kube-system pods found
	I1219 03:37:10.657998   51711 system_pods.go:61] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.658023   51711 system_pods.go:61] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.658033   51711 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.658042   51711 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.658048   51711 system_pods.go:61] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.658055   51711 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.658064   51711 system_pods.go:61] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.658069   51711 system_pods.go:61] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.658080   51711 system_pods.go:74] duration metric: took 7.963499ms to wait for pod list to return data ...
	I1219 03:37:10.658089   51711 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:37:10.668090   51711 default_sa.go:45] found service account: "default"
	I1219 03:37:10.668118   51711 default_sa.go:55] duration metric: took 10.020956ms for default service account to be created ...
	I1219 03:37:10.668130   51711 system_pods.go:116] waiting for k8s-apps to be running ...
	I1219 03:37:10.680469   51711 system_pods.go:86] 8 kube-system pods found
	I1219 03:37:10.680493   51711 system_pods.go:89] "coredns-66bc5c9577-bzq6s" [3e588983-8f37-472c-8234-e7dd2e1a6a4a] Running
	I1219 03:37:10.680507   51711 system_pods.go:89] "etcd-default-k8s-diff-port-382606" [e43d1512-8e19-4083-9f0f-9bbe3a1a3fdd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:37:10.680513   51711 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-382606" [6fa6e4cb-e27b-4009-b3ea-d9d9836e97cd] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:37:10.680520   51711 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-382606" [b56f31d0-4c4e-4cde-b993-c5d830b80e95] Running
	I1219 03:37:10.680525   51711 system_pods.go:89] "kube-proxy-vhml9" [8bec61eb-4ec4-4f3f-abf1-d471842e5929] Running
	I1219 03:37:10.680532   51711 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-382606" [ecc02e96-bdc5-4adf-b40c-e0acce9ca637] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:37:10.680540   51711 system_pods.go:89] "metrics-server-746fcd58dc-xphdl" [fb637b66-cb31-46cc-b490-110c2825cacc] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1219 03:37:10.680555   51711 system_pods.go:89] "storage-provisioner" [10e715ce-7edc-4af5-93e0-e975d561cdf3] Running
	I1219 03:37:10.680567   51711 system_pods.go:126] duration metric: took 12.428884ms to wait for k8s-apps to be running ...
	I1219 03:37:10.680577   51711 system_svc.go:44] waiting for kubelet service to be running ....
	I1219 03:37:10.680634   51711 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:37:10.723844   51711 system_svc.go:56] duration metric: took 43.258925ms WaitForService to wait for kubelet
	I1219 03:37:10.723871   51711 kubeadm.go:587] duration metric: took 7.05389644s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1219 03:37:10.723887   51711 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:37:10.731598   51711 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:37:10.731620   51711 node_conditions.go:123] node cpu capacity is 2
	I1219 03:37:10.731629   51711 node_conditions.go:105] duration metric: took 7.738835ms to run NodePressure ...
	I1219 03:37:10.731640   51711 start.go:242] waiting for startup goroutines ...
	I1219 03:37:11.145699   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:11.645111   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.144952   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:12.644987   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.151074   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:13.645695   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.146399   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:14.645725   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.146044   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:15.645372   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.145700   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:16.645126   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.145189   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:17.645089   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.151071   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:18.645879   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.145525   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:19.645572   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.144405   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:20.647145   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.145368   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:21.653732   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.146443   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:22.645800   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.145131   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:23.644929   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.145023   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:24.646072   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.145868   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:25.647994   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.147617   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:26.648227   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.149067   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:27.645432   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.145986   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:28.645392   51711 kapi.go:96] waiting for pod "app.kubernetes.io/name=kubernetes-dashboard-web", current state: Pending: [<nil>]
	I1219 03:37:29.149926   51711 kapi.go:107] duration metric: took 18.508817791s to wait for app.kubernetes.io/name=kubernetes-dashboard-web ...
	I1219 03:37:29.152664   51711 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-382606 addons enable metrics-server
	
	I1219 03:37:29.153867   51711 out.go:179] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I1219 03:37:29.155085   51711 addons.go:546] duration metric: took 25.485078365s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server dashboard]
	I1219 03:37:29.155131   51711 start.go:247] waiting for cluster config update ...
	I1219 03:37:29.155147   51711 start.go:256] writing updated cluster config ...
	I1219 03:37:29.156022   51711 ssh_runner.go:195] Run: rm -f paused
	I1219 03:37:29.170244   51711 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:29.178962   51711 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.186205   51711 pod_ready.go:94] pod "coredns-66bc5c9577-bzq6s" is "Ready"
	I1219 03:37:29.186234   51711 pod_ready.go:86] duration metric: took 7.24885ms for pod "coredns-66bc5c9577-bzq6s" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.280615   51711 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.286426   51711 pod_ready.go:94] pod "etcd-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.286446   51711 pod_ready.go:86] duration metric: took 5.805885ms for pod "etcd-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.288885   51711 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.293769   51711 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.293787   51711 pod_ready.go:86] duration metric: took 4.884445ms for pod "kube-apiserver-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.296432   51711 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.576349   51711 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:29.576388   51711 pod_ready.go:86] duration metric: took 279.933458ms for pod "kube-controller-manager-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:29.777084   51711 pod_ready.go:83] waiting for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.176016   51711 pod_ready.go:94] pod "kube-proxy-vhml9" is "Ready"
	I1219 03:37:30.176047   51711 pod_ready.go:86] duration metric: took 398.930848ms for pod "kube-proxy-vhml9" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.377206   51711 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776837   51711 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-382606" is "Ready"
	I1219 03:37:30.776861   51711 pod_ready.go:86] duration metric: took 399.600189ms for pod "kube-scheduler-default-k8s-diff-port-382606" in "kube-system" namespace to be "Ready" or be gone ...
	I1219 03:37:30.776872   51711 pod_ready.go:40] duration metric: took 1.606601039s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1219 03:37:30.827211   51711 start.go:625] kubectl: 1.35.0, cluster: 1.34.3 (minor skew: 1)
	I1219 03:37:30.828493   51711 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-382606" cluster and "default" namespace by default
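	The kapi.go wait loop above polls the "kubernetes-dashboard" namespace for a pod matching the label selector app.kubernetes.io/name=kubernetes-dashboard-web until it leaves Pending. A roughly equivalent manual check (illustrative only, not part of the test run; the kubeconfig context name is assumed to match the profile name) would be:
	
	  # wait for the dashboard web pod to become Ready, using the same label selector as kapi.go
	  kubectl --context default-k8s-diff-port-382606 -n kubernetes-dashboard \
	    wait --for=condition=Ready pod -l app.kubernetes.io/name=kubernetes-dashboard-web --timeout=6m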
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	32e15240fa31d       6e38f40d628db       17 minutes ago      Running             storage-provisioner                    2                   fc9d52e71753c       storage-provisioner                                     kube-system
	2d520f6777674       d9cbc9f4053ca       17 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   9513325997612       kubernetes-dashboard-metrics-scraper-6b5c7dc479-5pgl4   kubernetes-dashboard
	566a8988155f0       a0607af4fcd8a       17 minutes ago      Running             kubernetes-dashboard-api               0                   c0abe52d6c6ca       kubernetes-dashboard-api-595fbdbc7-hzt4g                kubernetes-dashboard
	690cda17bf449       dd54374d0ab14       18 minutes ago      Running             kubernetes-dashboard-auth              0                   1bcee8cab018e       kubernetes-dashboard-auth-db88b997-gj8f6                kubernetes-dashboard
	d52a807299555       59f642f485d26       18 minutes ago      Running             kubernetes-dashboard-web               0                   fc3434f3f7ba7       kubernetes-dashboard-web-858bd7466-g2wg2                kubernetes-dashboard
	6f5166ce8bc44       3a975970da2f5       18 minutes ago      Running             proxy                                  0                   c8875f4a5d6d7       kubernetes-dashboard-kong-f487b85cd-qdxp2               kubernetes-dashboard
	8186b14c17a6b       3a975970da2f5       18 minutes ago      Exited              clear-stale-pid                        0                   c8875f4a5d6d7       kubernetes-dashboard-kong-f487b85cd-qdxp2               kubernetes-dashboard
	7fda8ec32cf13       ead0a4a53df89       18 minutes ago      Running             coredns                                1                   20a7fc075bd0c       coredns-5dd5756b68-k7zvn                                kube-system
	275ea642aa833       56cc512116c8f       18 minutes ago      Running             busybox                                1                   6646a9ecc362c       busybox                                                 default
	faa5402ab5f1d       6e38f40d628db       18 minutes ago      Exited              storage-provisioner                    1                   fc9d52e71753c       storage-provisioner                                     kube-system
	6afdb246cf175       ea1030da44aa1       18 minutes ago      Running             kube-proxy                             1                   2849645f72689       kube-proxy-r6bwr                                        kube-system
	a4dbb1c53b812       73deb9a3f7025       18 minutes ago      Running             etcd                                   1                   112d85f8fde58       etcd-old-k8s-version-638861                             kube-system
	a4a00d791c075       bb5e0dde9054c       18 minutes ago      Running             kube-apiserver                         1                   3f560778cf693       kube-apiserver-old-k8s-version-638861                   kube-system
	1507e32b71c8a       f6f496300a2ae       18 minutes ago      Running             kube-scheduler                         1                   94bdeee2e5e7d       kube-scheduler-old-k8s-version-638861                   kube-system
	975da5b753f18       4be79c38a4bab       18 minutes ago      Running             kube-controller-manager                1                   8fab925eeb22c       kube-controller-manager-old-k8s-version-638861          kube-system
	24824a169681e       56cc512116c8f       20 minutes ago      Exited              busybox                                0                   859af02e82bf5       busybox                                                 default
	d47d05341c2f9       ead0a4a53df89       21 minutes ago      Exited              coredns                                0                   6b2d1447a7785       coredns-5dd5756b68-k7zvn                                kube-system
	e5763ced197c9       ea1030da44aa1       21 minutes ago      Exited              kube-proxy                             0                   7e2fca6297d8b       kube-proxy-r6bwr                                        kube-system
	3cb460b28aa41       f6f496300a2ae       21 minutes ago      Exited              kube-scheduler                         0                   8cf7f9dd3da91       kube-scheduler-old-k8s-version-638861                   kube-system
	599c003858b08       73deb9a3f7025       21 minutes ago      Exited              etcd                                   0                   26c400e7c9080       etcd-old-k8s-version-638861                             kube-system
	14158cc611fbd       bb5e0dde9054c       21 minutes ago      Exited              kube-apiserver                         0                   e967b7f4adf13       kube-apiserver-old-k8s-version-638861                   kube-system
	ac0b2043b72f6       4be79c38a4bab       21 minutes ago      Exited              kube-controller-manager                0                   ac1cf0689249e       kube-controller-manager-old-k8s-version-638861          kube-system
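	The listing above is in the format of "crictl ps -a" output against the containerd runtime. An illustrative way to reproduce it by hand inside the VM (assuming the old-k8s-version-638861 profile is still running) would be:
	
	  # list all containers, including exited ones, as seen by the CRI
	  minikube ssh -p old-k8s-version-638861 "sudo crictl ps -a"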
	
	
	==> containerd <==
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.612414814Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod4abe0b08-38b3-47be-8b35-e371e18dd0f4/6f5166ce8bc44cac494ec06c2ee37b5e3aacb57bfdcef32deb6d4c2965410180/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.613470059Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod717b544a-6cb1-48ca-a26e-1bc94bcb2c3f/2d520f677767433bfa20bc5b35e0550c36fbc692b2b50339245aa19b39b6d1f6/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.614396763Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod392ecf98f6f3f2d486999d713279a0a8/975da5b753f1848982b3baa9f93ea7de92ce4b2a5abcbf196c20705bad18c267/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.615782786Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pode9ed91c694fe54412cab040f01555e9a/1507e32b71c8aba99c55cc5db0063bd12b16ed088ef31e17345b85fa936f3675/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.617311990Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podac9701989312c5fe54cdbb595c769cfa/a4dbb1c53b8122726334f86af00d8ac478bd098e855d230304cdac0be53a0e23/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.618442190Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod18c93c96-a5a7-4399-91e6-4a8e4ece1364/6afdb246cf175b86939d5a92ab5440220f2b51c26ea9c52137a9a3bdf281eb3b/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.620133079Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod7c5cb69b-fc76-4a6b-ac31-d7eb130fce30/690cda17bf449652a4a2ff13e235583f0f51760f993845afe19091eb7bcfcc3b/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.622319300Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podbe434df5-3dc3-46ee-afab-a94b9048072e/275ea642aa8335ee67952db483469728a7b4618659737f819176fb2d425ae4e6/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.624824365Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd5225a08-4f82-42ca-b33b-94119eea214d/566a8988155f0c73fe28a7611d3c1a996b9a9a85b5153a4dd94a4c634f8c4136/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.626234167Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod110b4cbc-e16d-4cd9-aaf8-7a4854204c6a/7fda8ec32cf13372ee428ed3e5ef4d415391b5e7629bd309e4076f358fb5b547/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.627194732Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod232e90c8-378e-4a62-8ccb-850d56e8acce/d52a807299555a94ad54a215d26ab0bffd13574ee93cb182b433b954ec256f20/hugetlb.2MB.events\""
	Dec 19 03:54:03 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:03.628988973Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podf14b3d00-4033-4d89-af48-44a049c36335/32e15240fa31df7bf6ce24ce3792bc601ae9273a3725283056e797deaf01c1f2/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.650317915Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod7c5cb69b-fc76-4a6b-ac31-d7eb130fce30/690cda17bf449652a4a2ff13e235583f0f51760f993845afe19091eb7bcfcc3b/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.651926071Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podbe434df5-3dc3-46ee-afab-a94b9048072e/275ea642aa8335ee67952db483469728a7b4618659737f819176fb2d425ae4e6/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.652988419Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd5225a08-4f82-42ca-b33b-94119eea214d/566a8988155f0c73fe28a7611d3c1a996b9a9a85b5153a4dd94a4c634f8c4136/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.654091997Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod110b4cbc-e16d-4cd9-aaf8-7a4854204c6a/7fda8ec32cf13372ee428ed3e5ef4d415391b5e7629bd309e4076f358fb5b547/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.655786836Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod232e90c8-378e-4a62-8ccb-850d56e8acce/d52a807299555a94ad54a215d26ab0bffd13574ee93cb182b433b954ec256f20/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.657002580Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podf14b3d00-4033-4d89-af48-44a049c36335/32e15240fa31df7bf6ce24ce3792bc601ae9273a3725283056e797deaf01c1f2/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.658328902Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod138dcbda4f97b7a2e9859168d1696321/a4a00d791c0757b381bb071135629d62efcd7b058175bb441c82dabdcc84b8ff/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.659824824Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod4abe0b08-38b3-47be-8b35-e371e18dd0f4/6f5166ce8bc44cac494ec06c2ee37b5e3aacb57bfdcef32deb6d4c2965410180/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.660796786Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod717b544a-6cb1-48ca-a26e-1bc94bcb2c3f/2d520f677767433bfa20bc5b35e0550c36fbc692b2b50339245aa19b39b6d1f6/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.661619336Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod392ecf98f6f3f2d486999d713279a0a8/975da5b753f1848982b3baa9f93ea7de92ce4b2a5abcbf196c20705bad18c267/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.662444312Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pode9ed91c694fe54412cab040f01555e9a/1507e32b71c8aba99c55cc5db0063bd12b16ed088ef31e17345b85fa936f3675/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.663510734Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podac9701989312c5fe54cdbb595c769cfa/a4dbb1c53b8122726334f86af00d8ac478bd098e855d230304cdac0be53a0e23/hugetlb.2MB.events\""
	Dec 19 03:54:13 old-k8s-version-638861 containerd[723]: time="2025-12-19T03:54:13.665163907Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod18c93c96-a5a7-4399-91e6-4a8e4ece1364/6afdb246cf175b86939d5a92ab5440220f2b51c26ea9c52137a9a3bdf281eb3b/hugetlb.2MB.events\""
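	The repeated containerd errors above come from cgroup v2 hugetlb accounting: a hugetlb.<size>.events file contains a "max <count>" line rather than a bare integer, so parsing its whole content as a uint fails. An illustrative check from inside the VM (path copied from the first error line above):
	
	  # the .events file holds a key/value line, e.g. "max 0", not a plain number
	  cat /sys/fs/cgroup/kubepods/besteffort/pod4abe0b08-38b3-47be-8b35-e371e18dd0f4/6f5166ce8bc44cac494ec06c2ee37b5e3aacb57bfdcef32deb6d4c2965410180/hugetlb.2MB.events
	  # extract just the counter value
	  awk '$1 == "max" { print $2 }' /sys/fs/cgroup/kubepods/besteffort/pod4abe0b08-38b3-47be-8b35-e371e18dd0f4/6f5166ce8bc44cac494ec06c2ee37b5e3aacb57bfdcef32deb6d4c2965410180/hugetlb.2MB.events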
	
	
	==> coredns [7fda8ec32cf13372ee428ed3e5ef4d415391b5e7629bd309e4076f358fb5b547] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:41531 - 54427 "HINFO IN 4055901202491664803.6192370230818033698. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019008068s
	
	
	==> coredns [d47d05341c2f9312312755e83708494ed9b6626dc49261ca6470871aad909790] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 7996ca7cabdb2fd3e37b0463c78d5a492f8d30690ee66a90ae7ff24c50d9d936a24d239b3a5946771521ff70c09a796ffaf6ef8abe5753fd1ad5af38b6cdbb7f
	[INFO] Reloading complete
	[INFO] 127.0.0.1:54015 - 10535 "HINFO IN 4001215073591724234.6939015659602496373. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.014938124s
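	The "Still waiting on: "kubernetes"" messages above show the ready plugin holding back readiness until the kubernetes plugin has synced with the API server. As an illustrative probe (CoreDNS defaults assumed: the ready plugin listens on port 8181 at /ready and returns 200 only once all reporting plugins are ready), one could run from a node or pod with access to the pod IP:
	
	  # find the CoreDNS pod IP, then query its readiness endpoint
	  kubectl -n kube-system get pod -l k8s-app=kube-dns -o wide
	  curl -s -o /dev/null -w '%{http_code}\n' http://<coredns-pod-ip>:8181/ready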
	
	
	==> describe nodes <==
	Name:               old-k8s-version-638861
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-638861
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=old-k8s-version-638861
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_32_47_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///run/containerd/containerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:32:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-638861
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:54:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:52:03 +0000   Fri, 19 Dec 2025 03:32:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:52:03 +0000   Fri, 19 Dec 2025 03:32:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:52:03 +0000   Fri, 19 Dec 2025 03:32:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:52:03 +0000   Fri, 19 Dec 2025 03:35:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.183
	  Hostname:    old-k8s-version-638861
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 4ec830c5d817463d857d71a1ab5fac56
	  System UUID:                4ec830c5-d817-463d-857d-71a1ab5fac56
	  Boot ID:                    21dff12a-5acb-466b-b20a-28df67d9021e
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-5dd5756b68-k7zvn                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-old-k8s-version-638861                              100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-old-k8s-version-638861                    250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-old-k8s-version-638861           200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-r6bwr                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-old-k8s-version-638861                    100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-57f55c9bc5-n4sjv                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-api-595fbdbc7-hzt4g                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-db88b997-gj8f6                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-f487b85cd-qdxp2                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-6b5c7dc479-5pgl4    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-web-858bd7466-g2wg2                 100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  21m                kubelet          Node old-k8s-version-638861 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m                kubelet          Node old-k8s-version-638861 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m                kubelet          Node old-k8s-version-638861 status is now: NodeHasSufficientPID
	  Normal  NodeReady                21m                kubelet          Node old-k8s-version-638861 status is now: NodeReady
	  Normal  Starting                 21m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           21m                node-controller  Node old-k8s-version-638861 event: Registered Node old-k8s-version-638861 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node old-k8s-version-638861 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node old-k8s-version-638861 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node old-k8s-version-638861 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node old-k8s-version-638861 event: Registered Node old-k8s-version-638861 in Controller
	
	
	==> dmesg <==
	[Dec19 03:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000006] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000047] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.002532] (rpcbind)[119]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.851035] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000017] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.103924] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.693625] kauditd_printk_skb: 208 callbacks suppressed
	[  +4.544134] kauditd_printk_skb: 272 callbacks suppressed
	[  +0.133853] kauditd_printk_skb: 41 callbacks suppressed
	[Dec19 03:36] kauditd_printk_skb: 150 callbacks suppressed
	[  +5.199700] kauditd_printk_skb: 28 callbacks suppressed
	[  +5.772464] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.344696] kauditd_printk_skb: 39 callbacks suppressed
	[  +7.678146] kauditd_printk_skb: 32 callbacks suppressed
	
	
	==> etcd [599c003858b08bf2788501dd83ece0914816ff86f6cb26fe31b48a4eef02f9c7] <==
	{"level":"info","ts":"2025-12-19T03:33:01.778677Z","caller":"traceutil/trace.go:171","msg":"trace[418481790] range","detail":"{range_begin:/registry/pods/kube-system/kube-proxy-r6bwr; range_end:; response_count:1; response_revision:319; }","duration":"216.146441ms","start":"2025-12-19T03:33:01.562524Z","end":"2025-12-19T03:33:01.778671Z","steps":["trace[418481790] 'agreement among raft nodes before linearized reading'  (duration: 216.086596ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:01.779772Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"186.760624ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/ttl-controller\" ","response":"range_response_count:1 size:193"}
	{"level":"info","ts":"2025-12-19T03:33:01.779812Z","caller":"traceutil/trace.go:171","msg":"trace[1476294962] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/ttl-controller; range_end:; response_count:1; response_revision:321; }","duration":"186.806663ms","start":"2025-12-19T03:33:01.592996Z","end":"2025-12-19T03:33:01.779802Z","steps":["trace[1476294962] 'agreement among raft nodes before linearized reading'  (duration: 186.708471ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:01.780068Z","caller":"traceutil/trace.go:171","msg":"trace[990989111] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"212.942526ms","start":"2025-12-19T03:33:01.567115Z","end":"2025-12-19T03:33:01.780058Z","steps":["trace[990989111] 'process raft request'  (duration: 212.472985ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:01.780222Z","caller":"traceutil/trace.go:171","msg":"trace[106583097] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"211.202561ms","start":"2025-12-19T03:33:01.569005Z","end":"2025-12-19T03:33:01.780207Z","steps":["trace[106583097] 'process raft request'  (duration: 210.663108ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:01.780499Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"137.568322ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslice-controller\" ","response":"range_response_count:1 size:214"}
	{"level":"info","ts":"2025-12-19T03:33:01.780526Z","caller":"traceutil/trace.go:171","msg":"trace[1705747924] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslice-controller; range_end:; response_count:1; response_revision:321; }","duration":"137.600838ms","start":"2025-12-19T03:33:01.642919Z","end":"2025-12-19T03:33:01.780519Z","steps":["trace[1705747924] 'agreement among raft nodes before linearized reading'  (duration: 137.550156ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:01.780629Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"166.138053ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" ","response":"range_response_count:1 size:171"}
	{"level":"info","ts":"2025-12-19T03:33:01.780648Z","caller":"traceutil/trace.go:171","msg":"trace[222383032] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:1; response_revision:321; }","duration":"166.165944ms","start":"2025-12-19T03:33:01.614476Z","end":"2025-12-19T03:33:01.780642Z","steps":["trace[222383032] 'agreement among raft nodes before linearized reading'  (duration: 166.11971ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:11.189241Z","caller":"traceutil/trace.go:171","msg":"trace[1604052810] transaction","detail":"{read_only:false; response_revision:396; number_of_response:1; }","duration":"171.589199ms","start":"2025-12-19T03:33:11.01763Z","end":"2025-12-19T03:33:11.189219Z","steps":["trace[1604052810] 'process raft request'  (duration: 171.468684ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:11.526126Z","caller":"traceutil/trace.go:171","msg":"trace[1366067802] linearizableReadLoop","detail":"{readStateIndex:411; appliedIndex:410; }","duration":"137.90963ms","start":"2025-12-19T03:33:11.388195Z","end":"2025-12-19T03:33:11.526104Z","steps":["trace[1366067802] 'read index received'  (duration: 119.926789ms)","trace[1366067802] 'applied index is now lower than readState.Index'  (duration: 17.981718ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:11.526269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.076643ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:33:11.526341Z","caller":"traceutil/trace.go:171","msg":"trace[1645059783] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:396; }","duration":"138.165896ms","start":"2025-12-19T03:33:11.388164Z","end":"2025-12-19T03:33:11.52633Z","steps":["trace[1645059783] 'agreement among raft nodes before linearized reading'  (duration: 138.045932ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:15.527883Z","caller":"traceutil/trace.go:171","msg":"trace[109695888] linearizableReadLoop","detail":"{readStateIndex:421; appliedIndex:420; }","duration":"140.347346ms","start":"2025-12-19T03:33:15.387457Z","end":"2025-12-19T03:33:15.527804Z","steps":["trace[109695888] 'read index received'  (duration: 140.13905ms)","trace[109695888] 'applied index is now lower than readState.Index'  (duration: 206.986µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:15.528019Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.567698ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:33:15.528078Z","caller":"traceutil/trace.go:171","msg":"trace[1901639264] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:405; }","duration":"140.641232ms","start":"2025-12-19T03:33:15.387425Z","end":"2025-12-19T03:33:15.528067Z","steps":["trace[1901639264] 'agreement among raft nodes before linearized reading'  (duration: 140.544548ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:15.52835Z","caller":"traceutil/trace.go:171","msg":"trace[809415947] transaction","detail":"{read_only:false; response_revision:405; number_of_response:1; }","duration":"316.603947ms","start":"2025-12-19T03:33:15.21173Z","end":"2025-12-19T03:33:15.528334Z","steps":["trace[809415947] 'process raft request'  (duration: 315.918602ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:15.53019Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-12-19T03:33:15.211718Z","time spent":"316.719886ms","remote":"127.0.0.1:54632","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1107,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:399 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1034 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2025-12-19T03:33:15.96468Z","caller":"traceutil/trace.go:171","msg":"trace[1003698116] linearizableReadLoop","detail":"{readStateIndex:422; appliedIndex:421; }","duration":"262.22972ms","start":"2025-12-19T03:33:15.702429Z","end":"2025-12-19T03:33:15.964658Z","steps":["trace[1003698116] 'read index received'  (duration: 231.879953ms)","trace[1003698116] 'applied index is now lower than readState.Index'  (duration: 30.3488ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:15.964906Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.480731ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-5dd5756b68-k7zvn\" ","response":"range_response_count:1 size:4751"}
	{"level":"info","ts":"2025-12-19T03:33:15.964937Z","caller":"traceutil/trace.go:171","msg":"trace[1392165153] range","detail":"{range_begin:/registry/pods/kube-system/coredns-5dd5756b68-k7zvn; range_end:; response_count:1; response_revision:406; }","duration":"262.521705ms","start":"2025-12-19T03:33:15.702406Z","end":"2025-12-19T03:33:15.964928Z","steps":["trace[1392165153] 'agreement among raft nodes before linearized reading'  (duration: 262.360771ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:15.965114Z","caller":"traceutil/trace.go:171","msg":"trace[261807475] transaction","detail":"{read_only:false; response_revision:406; number_of_response:1; }","duration":"280.698225ms","start":"2025-12-19T03:33:15.684408Z","end":"2025-12-19T03:33:15.965106Z","steps":["trace[261807475] 'process raft request'  (duration: 249.859586ms)","trace[261807475] 'compare'  (duration: 30.272809ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:33:53.984686Z","caller":"traceutil/trace.go:171","msg":"trace[1754295826] transaction","detail":"{read_only:false; response_revision:452; number_of_response:1; }","duration":"245.190395ms","start":"2025-12-19T03:33:53.739476Z","end":"2025-12-19T03:33:53.984666Z","steps":["trace[1754295826] 'process raft request'  (duration: 244.997458ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:57.749258Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.067607ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/metrics-server\" ","response":"range_response_count:1 size:5183"}
	{"level":"info","ts":"2025-12-19T03:33:57.749706Z","caller":"traceutil/trace.go:171","msg":"trace[401862548] range","detail":"{range_begin:/registry/deployments/kube-system/metrics-server; range_end:; response_count:1; response_revision:486; }","duration":"170.574305ms","start":"2025-12-19T03:33:57.579117Z","end":"2025-12-19T03:33:57.749691Z","steps":["trace[401862548] 'range keys from in-memory index tree'  (duration: 169.881247ms)"],"step_count":1}
	
	
	==> etcd [a4dbb1c53b8122726334f86af00d8ac478bd098e855d230304cdac0be53a0e23] <==
	{"level":"info","ts":"2025-12-19T03:35:42.598543Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:35:42.599004Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-12-19T03:35:42.601137Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.183:2379"}
	{"level":"info","ts":"2025-12-19T03:35:42.601973Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-12-19T03:35:42.601997Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-12-19T03:35:42.602104Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-12-19T03:36:05.107336Z","caller":"traceutil/trace.go:171","msg":"trace[558099464] transaction","detail":"{read_only:false; response_revision:754; number_of_response:1; }","duration":"144.532222ms","start":"2025-12-19T03:36:04.962722Z","end":"2025-12-19T03:36:05.107254Z","steps":["trace[558099464] 'process raft request'  (duration: 144.395307ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:15.331617Z","caller":"traceutil/trace.go:171","msg":"trace[1694338832] transaction","detail":"{read_only:false; response_revision:775; number_of_response:1; }","duration":"147.204329ms","start":"2025-12-19T03:36:15.184384Z","end":"2025-12-19T03:36:15.331588Z","steps":["trace[1694338832] 'process raft request'  (duration: 147.074948ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:15.39219Z","caller":"traceutil/trace.go:171","msg":"trace[856957208] linearizableReadLoop","detail":"{readStateIndex:826; appliedIndex:824; }","duration":"136.528638ms","start":"2025-12-19T03:36:15.255641Z","end":"2025-12-19T03:36:15.392169Z","steps":["trace[856957208] 'read index received'  (duration: 75.729675ms)","trace[856957208] 'applied index is now lower than readState.Index'  (duration: 60.798284ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:36:15.393227Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"112.012099ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.61.183\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2025-12-19T03:36:15.393324Z","caller":"traceutil/trace.go:171","msg":"trace[47536450] range","detail":"{range_begin:/registry/masterleases/192.168.61.183; range_end:; response_count:1; response_revision:776; }","duration":"112.476946ms","start":"2025-12-19T03:36:15.280828Z","end":"2025-12-19T03:36:15.393305Z","steps":["trace[47536450] 'agreement among raft nodes before linearized reading'  (duration: 111.92194ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:15.39352Z","caller":"traceutil/trace.go:171","msg":"trace[1671633345] transaction","detail":"{read_only:false; response_revision:776; number_of_response:1; }","duration":"154.411351ms","start":"2025-12-19T03:36:15.239099Z","end":"2025-12-19T03:36:15.393511Z","steps":["trace[1671633345] 'process raft request'  (duration: 152.987524ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:15.393649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.030783ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:5 size:31949"}
	{"level":"info","ts":"2025-12-19T03:36:15.393667Z","caller":"traceutil/trace.go:171","msg":"trace[1123717966] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:5; response_revision:776; }","duration":"138.054614ms","start":"2025-12-19T03:36:15.255605Z","end":"2025-12-19T03:36:15.39366Z","steps":["trace[1123717966] 'agreement among raft nodes before linearized reading'  (duration: 137.970422ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:36.175138Z","caller":"traceutil/trace.go:171","msg":"trace[1482774329] transaction","detail":"{read_only:false; response_revision:821; number_of_response:1; }","duration":"139.367478ms","start":"2025-12-19T03:36:36.035703Z","end":"2025-12-19T03:36:36.175071Z","steps":["trace[1482774329] 'process raft request'  (duration: 139.113844ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:36.500424Z","caller":"traceutil/trace.go:171","msg":"trace[261135074] transaction","detail":"{read_only:false; response_revision:823; number_of_response:1; }","duration":"129.321931ms","start":"2025-12-19T03:36:36.371081Z","end":"2025-12-19T03:36:36.500403Z","steps":["trace[261135074] 'process raft request'  (duration: 127.882661ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:45.967671Z","caller":"traceutil/trace.go:171","msg":"trace[1486780956] transaction","detail":"{read_only:false; response_revision:827; number_of_response:1; }","duration":"103.984761ms","start":"2025-12-19T03:36:45.863666Z","end":"2025-12-19T03:36:45.967651Z","steps":["trace[1486780956] 'process raft request'  (duration: 103.828387ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:46.113011Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"122.186658ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/ingressclasses/\" range_end:\"/registry/ingressclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:36:46.113134Z","caller":"traceutil/trace.go:171","msg":"trace[1300966544] range","detail":"{range_begin:/registry/ingressclasses/; range_end:/registry/ingressclasses0; response_count:0; response_revision:827; }","duration":"122.329587ms","start":"2025-12-19T03:36:45.990771Z","end":"2025-12-19T03:36:46.113101Z","steps":["trace[1300966544] 'count revisions from in-memory index tree'  (duration: 121.884937ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:45:42.642715Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1037}
	{"level":"info","ts":"2025-12-19T03:45:42.665579Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1037,"took":"22.337685ms","hash":4168797594}
	{"level":"info","ts":"2025-12-19T03:45:42.665626Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4168797594,"revision":1037,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:50:42.650309Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1280}
	{"level":"info","ts":"2025-12-19T03:50:42.653279Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1280,"took":"2.030881ms","hash":3732258807}
	{"level":"info","ts":"2025-12-19T03:50:42.653409Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3732258807,"revision":1280,"compact-revision":1037}
	
	
	==> kernel <==
	 03:54:21 up 18 min,  0 users,  load average: 0.41, 0.27, 0.20
	Linux old-k8s-version-638861 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [14158cc611fbd1a4ea8dd6a4977864f3368ec8909e3f0f4fee3b20942931d770] <==
	E1219 03:33:57.287969       1 controller.go:135] adding "v1beta1.metrics.k8s.io" to AggregationController failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:33:57.289508       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1219 03:33:57.289543       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:33:57.297314       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:57.297374       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1219 03:33:57.297411       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:33:57.297438       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 service unavailable
	I1219 03:33:57.297445       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 03:33:57.473248       1 alloc.go:330] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.102.77.221"}
	W1219 03:33:57.496616       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:57.498960       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E1219 03:33:57.509080       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io "v1beta1.metrics.k8s.io": the object has been modified; please apply your changes to the latest version and try again
	W1219 03:33:57.515235       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:57.517602       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W1219 03:33:58.288458       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:58.288545       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:33:58.288556       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:33:58.288727       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:33:58.288744       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:33:58.289713       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [a4a00d791c0757b381bb071135629d62efcd7b058175bb441c82dabdcc84b8ff] <==
	I1219 03:50:45.190533       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:50:45.190551       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:50:45.190776       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:50:45.191693       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:51:44.045099       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.77.221:443: connect: connection refused
	I1219 03:51:44.045154       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:51:45.191182       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:51:45.191241       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:51:45.191255       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:51:45.192366       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:51:45.192536       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:51:45.192630       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I1219 03:52:44.046156       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.77.221:443: connect: connection refused
	I1219 03:52:44.046245       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I1219 03:53:44.045769       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.102.77.221:443: connect: connection refused
	I1219 03:53:44.045821       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:53:45.192474       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:53:45.192595       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I1219 03:53:45.192609       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:53:45.192723       1 handler_proxy.go:93] no RequestInfo found in the context
	E1219 03:53:45.192841       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I1219 03:53:45.194682       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
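
	Both kube-apiserver instances report the same aggregated-API failure: v1beta1.metrics.k8s.io cannot be served because the kube-system/metrics-server backend answers 503 or refuses connections on 10.102.77.221:443. A hedged follow-up from outside the node, assuming the profile name doubles as the kubeconfig context that minikube writes:
	
	  $ kubectl --context old-k8s-version-638861 get apiservice v1beta1.metrics.k8s.io
	  $ kubectl --context old-k8s-version-638861 -n kube-system get svc,endpoints metrics-server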
	
	
	==> kube-controller-manager [975da5b753f1848982b3baa9f93ea7de92ce4b2a5abcbf196c20705bad18c267] <==
	I1219 03:48:27.897739       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:48:57.563198       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:48:57.907807       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:49:27.573833       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:49:27.917055       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:49:57.583200       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:49:57.926587       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:50:27.590165       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:50:27.937347       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:50:57.596510       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:50:57.946394       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:51:27.606166       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:51:27.957090       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:51:57.612102       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:51:57.969563       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:52:13.374220       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="559.311µs"
	E1219 03:52:27.619453       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:52:27.980999       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I1219 03:52:28.371438       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="337.338µs"
	E1219 03:52:57.625126       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:52:57.990940       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:53:27.634666       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:53:28.001037       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E1219 03:53:57.641731       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I1219 03:53:58.012147       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-controller-manager [ac0b2043b72f60c8bde3367de31e4b84f564861a647a19c0a8547ccdd0e4a432] <==
	I1219 03:33:01.973742       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-k7zvn"
	I1219 03:33:02.070075       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="1.257851833s"
	I1219 03:33:02.231281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="161.147283ms"
	I1219 03:33:02.231396       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="66.297µs"
	I1219 03:33:02.278520       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="81.929µs"
	I1219 03:33:02.319035       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="233.811µs"
	I1219 03:33:03.849420       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="100.999µs"
	I1219 03:33:03.955463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="142.279µs"
	I1219 03:33:04.221462       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1219 03:33:04.258572       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-hslxw"
	I1219 03:33:04.282166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="60.958564ms"
	I1219 03:33:04.296476       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.576394ms"
	I1219 03:33:04.298270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="208.722µs"
	I1219 03:33:13.751329       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="61.194µs"
	I1219 03:33:13.883973       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="259.926µs"
	I1219 03:33:13.905278       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="197.751µs"
	I1219 03:33:13.912121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="68.311µs"
	I1219 03:33:42.387536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="18.298125ms"
	I1219 03:33:42.388156       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="552.951µs"
	I1219 03:33:57.316954       1 event.go:307] "Event occurred" object="kube-system/metrics-server" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-57f55c9bc5 to 1"
	I1219 03:33:57.346643       1 event.go:307] "Event occurred" object="kube-system/metrics-server-57f55c9bc5" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-57f55c9bc5-n4sjv"
	I1219 03:33:57.364040       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="48.069258ms"
	I1219 03:33:57.395079       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="30.97649ms"
	I1219 03:33:57.396892       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="83.618µs"
	I1219 03:33:57.401790       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="77.188µs"
	
	
	==> kube-proxy [6afdb246cf175b86939d5a92ab5440220f2b51c26ea9c52137a9a3bdf281eb3b] <==
	I1219 03:35:45.309114       1 server_others.go:69] "Using iptables proxy"
	I1219 03:35:45.329188       1 node.go:141] Successfully retrieved node IP: 192.168.61.183
	I1219 03:35:45.389287       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1219 03:35:45.389431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:35:45.392841       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:35:45.393529       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:35:45.394846       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:35:45.395324       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:35:45.398839       1 config.go:188] "Starting service config controller"
	I1219 03:35:45.400252       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:35:45.399156       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:35:45.400554       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:35:45.403351       1 config.go:315] "Starting node config controller"
	I1219 03:35:45.404475       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:35:45.501205       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:35:45.501276       1 shared_informer.go:318] Caches are synced for service config
	I1219 03:35:45.505070       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-proxy [e5763ced197c990a55ef49ee57c3d0117c14f750bfcbb78eeadf20d2e1ce8b21] <==
	I1219 03:33:03.435881       1 server_others.go:69] "Using iptables proxy"
	I1219 03:33:03.449363       1 node.go:141] Successfully retrieved node IP: 192.168.61.183
	I1219 03:33:03.539580       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I1219 03:33:03.539621       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:33:03.542379       1 server_others.go:152] "Using iptables Proxier"
	I1219 03:33:03.542440       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1219 03:33:03.542794       1 server.go:846] "Version info" version="v1.28.0"
	I1219 03:33:03.543208       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:33:03.544347       1 config.go:188] "Starting service config controller"
	I1219 03:33:03.544394       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1219 03:33:03.544518       1 config.go:97] "Starting endpoint slice config controller"
	I1219 03:33:03.544528       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1219 03:33:03.547122       1 config.go:315] "Starting node config controller"
	I1219 03:33:03.547154       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1219 03:33:03.645077       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1219 03:33:03.645136       1 shared_informer.go:318] Caches are synced for service config
	I1219 03:33:03.647364       1 shared_informer.go:318] Caches are synced for node config
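
	Both kube-proxy instances fall back to IPv4-only iptables mode ("No iptables support for family IPv6"), which matches the ip6tables nat-table errors the kubelet canary logs further down. A quick sketch to confirm the rule state on the node (profile name assumed to match the node name; the ip6tables command is expected to fail with the same "table does not exist" message):
	
	  $ minikube ssh -p old-k8s-version-638861 "sudo iptables -t nat -L KUBE-SERVICES | head"
	  $ minikube ssh -p old-k8s-version-638861 "sudo ip6tables -t nat -L"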
	
	
	==> kube-scheduler [1507e32b71c8aba99c55cc5db0063bd12b16ed088ef31e17345b85fa936f3675] <==
	I1219 03:35:41.534723       1 serving.go:348] Generated self-signed cert in-memory
	W1219 03:35:44.111617       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:35:44.111664       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:35:44.111675       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:35:44.111704       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:35:44.181530       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1219 03:35:44.181573       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:35:44.186347       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:35:44.187333       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1219 03:35:44.188371       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1219 03:35:44.189958       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1219 03:35:44.290438       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [3cb460b28aa411920e98df6adcd7b37a2bc80e2092bf8f1a14621f8c687e104c] <==
	W1219 03:32:44.502102       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1219 03:32:44.502604       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1219 03:32:44.502162       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1219 03:32:44.502652       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1219 03:32:44.502212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:44.502691       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:44.502281       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1219 03:32:44.503051       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1219 03:32:44.498124       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:44.503064       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:45.317590       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1219 03:32:45.317641       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1219 03:32:45.330568       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:45.330617       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:45.433621       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:45.433784       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:45.519107       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1219 03:32:45.519151       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1219 03:32:45.572045       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1219 03:32:45.572183       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1219 03:32:45.586405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1219 03:32:45.586685       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W1219 03:32:45.642160       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1219 03:32:45.642188       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1219 03:32:48.885958       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 19 03:51:48 old-k8s-version-638861 kubelet[1087]: E1219 03:51:48.352665    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:51:59 old-k8s-version-638861 kubelet[1087]: E1219 03:51:59.361726    1087 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:51:59 old-k8s-version-638861 kubelet[1087]: E1219 03:51:59.361792    1087 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:51:59 old-k8s-version-638861 kubelet[1087]: E1219 03:51:59.362038    1087 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-rxfbd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-n4sjv_kube-system(1f99367a-e0cc-4e5c-a25c-eefdc505b84b): ErrImagePull: failed to pull and unpack image "fake.domain/registry.k8s.io/echoserver:1.4": failed to resolve reference "fake.domain/registry.k8s.io/echoserver:1.4": failed to do request: Head "https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4": dial tcp: lookup fake.domain: no such host
	Dec 19 03:51:59 old-k8s-version-638861 kubelet[1087]: E1219 03:51:59.362078    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:52:13 old-k8s-version-638861 kubelet[1087]: E1219 03:52:13.352533    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:52:28 old-k8s-version-638861 kubelet[1087]: E1219 03:52:28.353924    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:52:39 old-k8s-version-638861 kubelet[1087]: E1219 03:52:39.385218    1087 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 19 03:52:39 old-k8s-version-638861 kubelet[1087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 03:52:39 old-k8s-version-638861 kubelet[1087]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 03:52:39 old-k8s-version-638861 kubelet[1087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 03:52:39 old-k8s-version-638861 kubelet[1087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 03:52:40 old-k8s-version-638861 kubelet[1087]: E1219 03:52:40.352829    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:52:53 old-k8s-version-638861 kubelet[1087]: E1219 03:52:53.353398    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:53:06 old-k8s-version-638861 kubelet[1087]: E1219 03:53:06.352030    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:53:18 old-k8s-version-638861 kubelet[1087]: E1219 03:53:18.352786    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:53:32 old-k8s-version-638861 kubelet[1087]: E1219 03:53:32.352198    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:53:39 old-k8s-version-638861 kubelet[1087]: E1219 03:53:39.385088    1087 iptables.go:575] "Could not set up iptables canary" err=<
	Dec 19 03:53:39 old-k8s-version-638861 kubelet[1087]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Dec 19 03:53:39 old-k8s-version-638861 kubelet[1087]:         ip6tables v1.8.11 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Dec 19 03:53:39 old-k8s-version-638861 kubelet[1087]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Dec 19 03:53:39 old-k8s-version-638861 kubelet[1087]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Dec 19 03:53:46 old-k8s-version-638861 kubelet[1087]: E1219 03:53:46.351637    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:53:57 old-k8s-version-638861 kubelet[1087]: E1219 03:53:57.352765    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
	Dec 19 03:54:10 old-k8s-version-638861 kubelet[1087]: E1219 03:54:10.352379    1087 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-n4sjv" podUID="1f99367a-e0cc-4e5c-a25c-eefdc505b84b"
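
	The kubelet is stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4, an unresolvable registry, so metrics-server-57f55c9bc5-n4sjv never starts. A minimal sketch for pulling the same events out of the API server instead of the kubelet journal (pod name and namespace taken from the log lines above; the kubeconfig context is assumed to match the profile name):
	
	  $ kubectl --context old-k8s-version-638861 -n kube-system describe pod metrics-server-57f55c9bc5-n4sjv
	  $ kubectl --context old-k8s-version-638861 -n kube-system get events --field-selector involvedObject.name=metrics-server-57f55c9bc5-n4sjv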
	
	
	==> kubernetes-dashboard [2d520f677767433bfa20bc5b35e0550c36fbc692b2b50339245aa19b39b6d1f6] <==
	10.244.0.1 - - [19/Dec/2025:03:51:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:51:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:51:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:51:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:51:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:52:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:52:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:52:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:52:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:52:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:52:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:52:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:52:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:53:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:53:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:53:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:53:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:53:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:53:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:53:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:53:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:54:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	10.244.0.1 - - [19/Dec/2025:03:54:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.28"
	E1219 03:52:25.641959       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:53:25.642714       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [566a8988155f0c73fe28a7611d3c1a996b9a9a85b5153a4dd94a4c634f8c4136] <==
	I1219 03:36:22.141036       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:36:22.141152       1 init.go:49] Using in-cluster config
	I1219 03:36:22.142025       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:36:22.142241       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:36:22.142421       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:36:22.142431       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:36:22.151573       1 main.go:119] "Successful initial request to the apiserver" version="v1.28.0"
	I1219 03:36:22.151605       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:36:22.229421       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:36:22.230314       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:36:52.238932       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [690cda17bf449652a4a2ff13e235583f0f51760f993845afe19091eb7bcfcc3b] <==
	I1219 03:36:18.393815       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:36:18.393977       1 init.go:49] Using in-cluster config
	I1219 03:36:18.394200       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [d52a807299555a94ad54a215d26ab0bffd13574ee93cb182b433b954ec256f20] <==
	I1219 03:36:14.840301       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:36:14.840566       1 init.go:48] Using in-cluster config
	I1219 03:36:14.841157       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [32e15240fa31df7bf6ce24ce3792bc601ae9273a3725283056e797deaf01c1f2] <==
	I1219 03:36:29.719731       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1219 03:36:29.736835       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1219 03:36:29.737971       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1219 03:36:47.192064       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1219 03:36:47.193094       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-638861_e5fc271f-70ff-4288-add8-55a85b334ed9!
	I1219 03:36:47.196942       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"b1d6a796-c385-48cc-9e8d-36b9927d5f1f", APIVersion:"v1", ResourceVersion:"829", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-638861_e5fc271f-70ff-4288-add8-55a85b334ed9 became leader
	I1219 03:36:47.293584       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-638861_e5fc271f-70ff-4288-add8-55a85b334ed9!
	
	
	==> storage-provisioner [faa5402ab5f1dae317489202cbd7a47a83c8b119d88b43f090850212d610b624] <==
	I1219 03:35:45.081955       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:36:15.118032       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-638861 -n old-k8s-version-638861
helpers_test.go:270: (dbg) Run:  kubectl --context old-k8s-version-638861 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-57f55c9bc5-n4sjv
helpers_test.go:283: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context old-k8s-version-638861 describe pod metrics-server-57f55c9bc5-n4sjv
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context old-k8s-version-638861 describe pod metrics-server-57f55c9bc5-n4sjv: exit status 1 (64.481983ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-n4sjv" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context old-k8s-version-638861 describe pod metrics-server-57f55c9bc5-n4sjv: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (542.71s)
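Note on the failure above: the ImagePullBackOff loop in the kubelet log is expected rather than a regression, because the Audit table in the following section shows metrics-server being enabled with --registries=MetricsServer=fake.domain, so registry.k8s.io/echoserver:1.4 can never actually be pulled. The NotFound from the post-mortem describe is also likely a namespace effect: the kubelet log places the pod in kube-system, while the describe ran without -n. A minimal manual re-check along these lines (illustration only, not something the harness runs; the Deployment name metrics-server is inferred from the pod name and is an assumption) would confirm both points:

	# Assumed manual re-check, not executed by the harness: describe the pod in the
	# namespace the kubelet log reports instead of the default namespace.
	kubectl --context old-k8s-version-638861 -n kube-system \
	  describe pod metrics-server-57f55c9bc5-n4sjv
	# Confirm which image reference the addon installed (Deployment name assumed).
	kubectl --context old-k8s-version-638861 -n kube-system \
	  get deploy metrics-server -o jsonpath='{.spec.template.spec.containers[0].image}'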

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:45:34.379983    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-728806 -n no-preload-728806
start_stop_delete_test.go:285: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:54:27.705070934 +0000 UTC m=+5354.519699760
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
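The wait above keys off the label k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace, while the container status table further down shows the dashboard api, auth, web, kong and metrics-scraper pods all running. A quick way to see whether any of those pods actually carries that label is a sketch like the following (manual check only, not part of the test run):

	# Assumed manual check: compare the pods' labels against the selector the test waits on.
	kubectl --context no-preload-728806 -n kubernetes-dashboard get pods --show-labels
	kubectl --context no-preload-728806 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard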
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-728806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-728806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (64.661905ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-728806 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
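The assertion above expects a dashboard-metrics-scraper Deployment whose image contains registry.k8s.io/echoserver:1.4 (the addon was enabled with --images=MetricsScraper=registry.k8s.io/echoserver:1.4 per the Audit table), but the describe came back NotFound, and the container status table below only shows a pod named kubernetes-dashboard-metrics-scraper-594bbfb84b-m7bxg. Listing what the addon actually created would show whether the Deployment simply uses a different name in this dashboard version (manual check only, not part of the test):

	# Assumed manual check: list the dashboard Deployments and the images they run.
	kubectl --context no-preload-728806 -n kubernetes-dashboard get deploy -o wide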
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-728806 -n no-preload-728806
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-728806 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p no-preload-728806 logs -n 25: (1.769957775s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────
────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────
────────────────┤
	│ delete  │ -p bridge-694633                                                                                                                                                                                                                                       │ bridge-694633                │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ delete  │ -p disable-driver-mounts-477416                                                                                                                                                                                                                        │ disable-driver-mounts-477416 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:34 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-638861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                           │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:33 UTC │
	│ stop    │ -p old-k8s-version-638861 --alsologtostderr -v=3                                                                                                                                                                                                       │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:33 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p no-preload-728806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p no-preload-728806 --alsologtostderr -v=3                                                                                                                                                                                                            │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-832734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                               │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p embed-certs-832734 --alsologtostderr -v=3                                                                                                                                                                                                           │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ stop    │ -p default-k8s-diff-port-382606 --alsologtostderr -v=3                                                                                                                                                                                                 │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-638861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                          │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:37 UTC │
	│ image   │ old-k8s-version-638861 image list --format=json                                                                                                                                                                                                        │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ pause   │ -p old-k8s-version-638861 --alsologtostderr -v=1                                                                                                                                                                                                       │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ unpause │ -p old-k8s-version-638861 --alsologtostderr -v=1                                                                                                                                                                                                       │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p old-k8s-version-638861                                                                                                                                                                                                                              │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p old-k8s-version-638861                                                                                                                                                                                                                              │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p newest-cni-979595 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-979595            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────
────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:54:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:54:27.192284   55963 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:54:27.192554   55963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:27.192564   55963 out.go:374] Setting ErrFile to fd 2...
	I1219 03:54:27.192569   55963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:27.192814   55963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:54:27.193330   55963 out.go:368] Setting JSON to false
	I1219 03:54:27.194272   55963 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5806,"bootTime":1766110661,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:54:27.194323   55963 start.go:143] virtualization: kvm guest
	I1219 03:54:27.196187   55963 out.go:179] * [newest-cni-979595] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:54:27.197745   55963 notify.go:221] Checking for updates...
	I1219 03:54:27.197758   55963 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:54:27.198924   55963 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:54:27.200214   55963 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:54:27.201254   55963 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:54:27.202292   55963 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:54:27.203305   55963 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:54:27.205091   55963 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:54:27.205251   55963 config.go:182] Loaded profile config "embed-certs-832734": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:54:27.205388   55963 config.go:182] Loaded profile config "guest-269272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1219 03:54:27.205524   55963 config.go:182] Loaded profile config "no-preload-728806": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:54:27.205679   55963 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:54:27.243710   55963 out.go:179] * Using the kvm2 driver based on user configuration
	I1219 03:54:27.244920   55963 start.go:309] selected driver: kvm2
	I1219 03:54:27.244948   55963 start.go:928] validating driver "kvm2" against <nil>
	I1219 03:54:27.244979   55963 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:54:27.245942   55963 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1219 03:54:27.245993   55963 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1219 03:54:27.246255   55963 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:54:27.246287   55963 cni.go:84] Creating CNI manager for ""
	I1219 03:54:27.246341   55963 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:54:27.246351   55963 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 03:54:27.246403   55963 start.go:353] cluster config:
	{Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:27.246526   55963 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:54:27.247860   55963 out.go:179] * Starting "newest-cni-979595" primary control-plane node in "newest-cni-979595" cluster
	I1219 03:54:27.248846   55963 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:54:27.248875   55963 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	I1219 03:54:27.248883   55963 cache.go:65] Caching tarball of preloaded images
	I1219 03:54:27.248960   55963 preload.go:238] Found /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:54:27.248973   55963 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1219 03:54:27.249080   55963 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/config.json ...
	I1219 03:54:27.249100   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/config.json: {Name:mk44e3bf87006423b68d2f8f5d5aa41ebe28e61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:27.249264   55963 start.go:360] acquireMachinesLock for newest-cni-979595: {Name:mkbf0ff4f4743f75373609a52c13bcf346114394 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:54:27.249299   55963 start.go:364] duration metric: took 18.947µs to acquireMachinesLock for "newest-cni-979595"
	I1219 03:54:27.249322   55963 start.go:93] Provisioning new machine with config: &{Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:54:27.249405   55963 start.go:125] createHost starting for "" (driver="kvm2")
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	944453a82c5aa       6e38f40d628db       17 minutes ago      Running             storage-provisioner                    2                   ed7a60a20e2dc       storage-provisioner                                     kube-system
	57718e1249a5f       3a975970da2f5       17 minutes ago      Running             proxy                                  0                   5a6d18a9b02b5       kubernetes-dashboard-kong-78b7499b45-k5gpr              kubernetes-dashboard
	a1d949e23b7c7       3a975970da2f5       17 minutes ago      Exited              clear-stale-pid                        0                   5a6d18a9b02b5       kubernetes-dashboard-kong-78b7499b45-k5gpr              kubernetes-dashboard
	7b0a529acf54e       a0607af4fcd8a       18 minutes ago      Running             kubernetes-dashboard-api               0                   a9292ca47ec35       kubernetes-dashboard-api-68f55bc586-nnhm4               kubernetes-dashboard
	f43ad99d90fdc       59f642f485d26       18 minutes ago      Running             kubernetes-dashboard-web               0                   3265cd33f89ee       kubernetes-dashboard-web-7f7574785f-kl9q5               kubernetes-dashboard
	47bce22eb1c3d       d9cbc9f4053ca       18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   ec4c4393f5099       kubernetes-dashboard-metrics-scraper-594bbfb84b-m7bxg   kubernetes-dashboard
	4ab0badeeeac9       dd54374d0ab14       18 minutes ago      Running             kubernetes-dashboard-auth              0                   7882aad4ead25       kubernetes-dashboard-auth-7f98f4d65c-88rd2              kubernetes-dashboard
	e62eb42de605a       aa5e3ebc0dfed       18 minutes ago      Running             coredns                                1                   a7280df0fffea       coredns-7d764666f9-x9688                                kube-system
	6b66d3ad8ceef       56cc512116c8f       18 minutes ago      Running             busybox                                1                   1a50d540f4edd       busybox                                                 default
	18e278297341f       af0321f3a4f38       18 minutes ago      Running             kube-proxy                             1                   68823e5c5e44c       kube-proxy-9kmrc                                        kube-system
	974c9df3fdb6e       6e38f40d628db       18 minutes ago      Exited              storage-provisioner                    1                   ed7a60a20e2dc       storage-provisioner                                     kube-system
	d4b2c0b372751       5032a56602e1b       18 minutes ago      Running             kube-controller-manager                1                   70dc97ffa012f       kube-controller-manager-no-preload-728806               kube-system
	69e38503d0f4a       0a108f7189562       18 minutes ago      Running             etcd                                   1                   ea01907d2d0ff       etcd-no-preload-728806                                  kube-system
	31fe13610d626       73f80cdc073da       18 minutes ago      Running             kube-scheduler                         1                   d4d20e11ed4b0       kube-scheduler-no-preload-728806                        kube-system
	8e9475c78e5b9       58865405a13bc       18 minutes ago      Running             kube-apiserver                         1                   45995925bd2de       kube-apiserver-no-preload-728806                        kube-system
	2eaa04351e239       56cc512116c8f       20 minutes ago      Exited              busybox                                0                   db288691afa59       busybox                                                 default
	0daf3a0d964c7       aa5e3ebc0dfed       21 minutes ago      Exited              coredns                                0                   bdbf4bbb83a3b       coredns-7d764666f9-x9688                                kube-system
	ba269e021c7b5       af0321f3a4f38       21 minutes ago      Exited              kube-proxy                             0                   f6b0e3b0f100e       kube-proxy-9kmrc                                        kube-system
	9e837e7d646d3       5032a56602e1b       21 minutes ago      Exited              kube-controller-manager                0                   82aa7824bae05       kube-controller-manager-no-preload-728806               kube-system
	e95ce55118a31       73f80cdc073da       21 minutes ago      Exited              kube-scheduler                         0                   92a7b35073023       kube-scheduler-no-preload-728806                        kube-system
	d76db54c93b48       0a108f7189562       21 minutes ago      Exited              etcd                                   0                   937b817e0c237       etcd-no-preload-728806                                  kube-system
	78fce5e539795       58865405a13bc       21 minutes ago      Exited              kube-apiserver                         0                   b28b150383205       kube-apiserver-no-preload-728806                        kube-system
	
	
	==> containerd <==
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.852816126Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podc626899b-06fd-4952-810c-a87343019170/18e278297341f2ef6879a8f2a0674670948d8c3f6abb39a02ef05bd379a52a3e/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.854036969Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf36e31fd-042e-433e-a7c5-a134902d3898/e62eb42de605a3fd4f852cc3a22bbb2ed2dde85681502e3e3010ca301d879b82/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.855175966Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod81b57837-5fb9-47e8-8129-e21af498d464/944453a82c5aa9d3c5e09a2a924586b9708bf3f99f7878d28d61b7e7c1fee4c8/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.856673641Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod80fc013f-db9d-4834-a88c-6730bc1e786e/4ab0badeeeac9066abd92e7322c2ebdf2519d10c318b6dbb9979651de9f47051/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.858176531Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod750cce99-cc88-428c-8026-6bf7cf14c959/47bce22eb1c3d05d42b992a32a07de7295a66c8221d28877512dd638f7211103/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.859743970Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd2315740-909c-4982-abd4-594425918b9d/f43ad99d90fdcbc40aff01641cbe5c36c47cb8ac50d0891cd9bb37a3ba555617/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.861089562Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod72cb839e-11d5-4529-a6e4-b296f089c405/7b0a529acf54eff15dbdb1f47e2ad6584d347a66e1e80bcb4d9664c4444610fd/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.862097229Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod5b9fe8fcc4d0408f4baa26793dc5e565/31fe13610d626eb1358e924d8da122c03230ae2e208af3021661987752f9fb4a/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.862991312Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod8d7c099790ad554c018d4a29ed4b2e09/d4b2c0b372751a1bc59d857101659a4a9bfe03f44e49264fc35f2b4bf8d1e6c2/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.864079829Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda5072acb-a3af-46a6-9b98-5b0623b96a12/57718e1249a5f35c50a73ad17a1694996de936bb2a37159c31cbfa1e94a0efc9/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.865367153Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2c9ed5ac1044fad5ac7c7ead9dca926a/8e9475c78e5b931b2d174b98c7f3095a1ee185519d00942f21d0ebf9092be1a0/hugetlb.2MB.events\""
	Dec 19 03:54:17 no-preload-728806 containerd[719]: time="2025-12-19T03:54:17.866784397Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod8b96b916-9a7f-4d6d-9e31-aad0b0358a6b/6b66d3ad8ceefa6126aa63b7ab94cd560eddd093ff9d8d775639f4c6f9183d7e/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.891485928Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod425da6bb99dfb8cef077665118dd8f70/69e38503d0f4a7b85114416cbd244c14828460424d87ddf9dcec627e11f6d019/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.894719312Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podc626899b-06fd-4952-810c-a87343019170/18e278297341f2ef6879a8f2a0674670948d8c3f6abb39a02ef05bd379a52a3e/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.896509670Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf36e31fd-042e-433e-a7c5-a134902d3898/e62eb42de605a3fd4f852cc3a22bbb2ed2dde85681502e3e3010ca301d879b82/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.897949380Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod81b57837-5fb9-47e8-8129-e21af498d464/944453a82c5aa9d3c5e09a2a924586b9708bf3f99f7878d28d61b7e7c1fee4c8/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.899332218Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod80fc013f-db9d-4834-a88c-6730bc1e786e/4ab0badeeeac9066abd92e7322c2ebdf2519d10c318b6dbb9979651de9f47051/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.901401402Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod750cce99-cc88-428c-8026-6bf7cf14c959/47bce22eb1c3d05d42b992a32a07de7295a66c8221d28877512dd638f7211103/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.908081941Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd2315740-909c-4982-abd4-594425918b9d/f43ad99d90fdcbc40aff01641cbe5c36c47cb8ac50d0891cd9bb37a3ba555617/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.912155664Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod72cb839e-11d5-4529-a6e4-b296f089c405/7b0a529acf54eff15dbdb1f47e2ad6584d347a66e1e80bcb4d9664c4444610fd/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.915804645Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod5b9fe8fcc4d0408f4baa26793dc5e565/31fe13610d626eb1358e924d8da122c03230ae2e208af3021661987752f9fb4a/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.918060046Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod8d7c099790ad554c018d4a29ed4b2e09/d4b2c0b372751a1bc59d857101659a4a9bfe03f44e49264fc35f2b4bf8d1e6c2/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.919408481Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda5072acb-a3af-46a6-9b98-5b0623b96a12/57718e1249a5f35c50a73ad17a1694996de936bb2a37159c31cbfa1e94a0efc9/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.921094538Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2c9ed5ac1044fad5ac7c7ead9dca926a/8e9475c78e5b931b2d174b98c7f3095a1ee185519d00942f21d0ebf9092be1a0/hugetlb.2MB.events\""
	Dec 19 03:54:27 no-preload-728806 containerd[719]: time="2025-12-19T03:54:27.922506656Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod8b96b916-9a7f-4d6d-9e31-aad0b0358a6b/6b66d3ad8ceefa6126aa63b7ab94cd560eddd093ff9d8d775639f4c6f9183d7e/hugetlb.2MB.events\""
	
	
	==> coredns [0daf3a0d964c769fdfcb3212d2577b256892a31a7645cfd15ea40a6da28089e8] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[INFO] plugin/ready: Plugins not ready: "kubernetes"
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[ERROR] plugin/kubernetes: Failed to watch
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	[INFO] Reloading complete
	[INFO] 127.0.0.1:47780 - 25751 "HINFO IN 5948633206316442089.7485547438095066407. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.016123219s
	
	
	==> coredns [e62eb42de605a3fd4f852cc3a22bbb2ed2dde85681502e3e3010ca301d879b82] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6e77f21cd6946b87ec86c565e2060aa5d23c02882cb22fd7a321b5e8cd0c8bdafe21968fcff406405707b988b753da21ecd190fe02329f1b569bfa74920cc0fa
	CoreDNS-1.13.1
	linux/amd64, go1.25.2, 1db4568
	[INFO] 127.0.0.1:52055 - 53030 "HINFO IN 742552530284827370.5350346264999543959. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.017263299s
	
	
	==> describe nodes <==
	Name:               no-preload-728806
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-728806
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=no-preload-728806
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_33_19_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:33:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-728806
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:54:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:54:09 +0000   Fri, 19 Dec 2025 03:33:13 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:54:09 +0000   Fri, 19 Dec 2025 03:33:13 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:54:09 +0000   Fri, 19 Dec 2025 03:33:13 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:54:09 +0000   Fri, 19 Dec 2025 03:36:00 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.172
	  Hostname:    no-preload-728806
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 de6d11f26d144166878a0f5a46eed7b7
	  System UUID:                de6d11f2-6d14-4166-878a-0f5a46eed7b7
	  Boot ID:                    1ee2860e-8c54-45ec-b573-c2c3ef6b4e05
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.35.0-rc.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7d764666f9-x9688                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-no-preload-728806                                   100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-no-preload-728806                         250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-no-preload-728806                200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-9kmrc                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-no-preload-728806                         100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-5d785b57d4-9zx57                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-api-68f55bc586-nnhm4                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-7f98f4d65c-88rd2               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-78b7499b45-k5gpr               0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-594bbfb84b-m7bxg    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-web-7f7574785f-kl9q5                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  RegisteredNode  21m   node-controller  Node no-preload-728806 event: Registered Node no-preload-728806 in Controller
	  Normal  RegisteredNode  18m   node-controller  Node no-preload-728806 event: Registered Node no-preload-728806 in Controller
	
	
	==> dmesg <==
	[Dec19 03:35] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000010] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.000051] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.006017] (rpcbind)[117]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.892610] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.104228] kauditd_printk_skb: 85 callbacks suppressed
	[  +5.689117] kauditd_printk_skb: 199 callbacks suppressed
	[Dec19 03:36] kauditd_printk_skb: 227 callbacks suppressed
	[  +0.034544] kauditd_printk_skb: 95 callbacks suppressed
	[  +5.043875] kauditd_printk_skb: 183 callbacks suppressed
	[  +6.308125] kauditd_printk_skb: 18 callbacks suppressed
	[  +6.870794] kauditd_printk_skb: 32 callbacks suppressed
	[ +14.225211] kauditd_printk_skb: 42 callbacks suppressed
	
	
	==> etcd [69e38503d0f4a7b85114416cbd244c14828460424d87ddf9dcec627e11f6d019] <==
	{"level":"info","ts":"2025-12-19T03:36:08.195727Z","caller":"traceutil/trace.go:172","msg":"trace[1369365917] transaction","detail":"{read_only:false; response_revision:738; number_of_response:1; }","duration":"297.126525ms","start":"2025-12-19T03:36:07.898591Z","end":"2025-12-19T03:36:08.195718Z","steps":["trace[1369365917] 'process raft request'  (duration: 292.605953ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:36:08.196595Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"145.179425ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kubernetes-dashboard/admin-user\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:36:08.196703Z","caller":"traceutil/trace.go:172","msg":"trace[1905526564] range","detail":"{range_begin:/registry/serviceaccounts/kubernetes-dashboard/admin-user; range_end:; response_count:0; response_revision:743; }","duration":"145.289485ms","start":"2025-12-19T03:36:08.051406Z","end":"2025-12-19T03:36:08.196695Z","steps":["trace[1905526564] 'agreement among raft nodes before linearized reading'  (duration: 145.16394ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.198111Z","caller":"traceutil/trace.go:172","msg":"trace[420394175] transaction","detail":"{read_only:false; response_revision:740; number_of_response:1; }","duration":"224.163943ms","start":"2025-12-19T03:36:07.973933Z","end":"2025-12-19T03:36:08.198097Z","steps":["trace[420394175] 'process raft request'  (duration: 220.284119ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.198540Z","caller":"traceutil/trace.go:172","msg":"trace[127230267] transaction","detail":"{read_only:false; response_revision:739; number_of_response:1; }","duration":"224.649898ms","start":"2025-12-19T03:36:07.973879Z","end":"2025-12-19T03:36:08.198529Z","steps":["trace[127230267] 'process raft request'  (duration: 220.283799ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.201283Z","caller":"traceutil/trace.go:172","msg":"trace[259166362] transaction","detail":"{read_only:false; response_revision:741; number_of_response:1; }","duration":"220.872935ms","start":"2025-12-19T03:36:07.980329Z","end":"2025-12-19T03:36:08.201202Z","steps":["trace[259166362] 'process raft request'  (duration: 213.916481ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.203565Z","caller":"traceutil/trace.go:172","msg":"trace[937592532] transaction","detail":"{read_only:false; response_revision:742; number_of_response:1; }","duration":"221.329918ms","start":"2025-12-19T03:36:07.982223Z","end":"2025-12-19T03:36:08.203553Z","steps":["trace[937592532] 'process raft request'  (duration: 212.060307ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.206645Z","caller":"traceutil/trace.go:172","msg":"trace[1774479602] transaction","detail":"{read_only:false; response_revision:743; number_of_response:1; }","duration":"211.801088ms","start":"2025-12-19T03:36:07.994832Z","end":"2025-12-19T03:36:08.206633Z","steps":["trace[1774479602] 'process raft request'  (duration: 199.478067ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.207607Z","caller":"traceutil/trace.go:172","msg":"trace[290694141] transaction","detail":"{read_only:false; response_revision:745; number_of_response:1; }","duration":"193.621683ms","start":"2025-12-19T03:36:08.013978Z","end":"2025-12-19T03:36:08.207599Z","steps":["trace[290694141] 'process raft request'  (duration: 192.833777ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.208012Z","caller":"traceutil/trace.go:172","msg":"trace[1293584312] transaction","detail":"{read_only:false; response_revision:744; number_of_response:1; }","duration":"205.579052ms","start":"2025-12-19T03:36:08.002423Z","end":"2025-12-19T03:36:08.208002Z","steps":["trace[1293584312] 'process raft request'  (duration: 204.327672ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.208245Z","caller":"traceutil/trace.go:172","msg":"trace[646696037] transaction","detail":"{read_only:false; response_revision:746; number_of_response:1; }","duration":"193.584631ms","start":"2025-12-19T03:36:08.014650Z","end":"2025-12-19T03:36:08.208234Z","steps":["trace[646696037] 'process raft request'  (duration: 192.191164ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.208271Z","caller":"traceutil/trace.go:172","msg":"trace[1269680450] transaction","detail":"{read_only:false; response_revision:747; number_of_response:1; }","duration":"192.881447ms","start":"2025-12-19T03:36:08.015385Z","end":"2025-12-19T03:36:08.208266Z","steps":["trace[1269680450] 'process raft request'  (duration: 191.528853ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:08.208357Z","caller":"traceutil/trace.go:172","msg":"trace[1596576251] transaction","detail":"{read_only:false; response_revision:748; number_of_response:1; }","duration":"192.815707ms","start":"2025-12-19T03:36:08.015535Z","end":"2025-12-19T03:36:08.208350Z","steps":["trace[1596576251] 'process raft request'  (duration: 191.740222ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:13.865837Z","caller":"traceutil/trace.go:172","msg":"trace[746629949] transaction","detail":"{read_only:false; response_revision:783; number_of_response:1; }","duration":"157.535196ms","start":"2025-12-19T03:36:13.708284Z","end":"2025-12-19T03:36:13.865819Z","steps":["trace[746629949] 'process raft request'  (duration: 157.431943ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:36:36.482358Z","caller":"traceutil/trace.go:172","msg":"trace[1390622980] linearizableReadLoop","detail":"{readStateIndex:875; appliedIndex:875; }","duration":"106.368191ms","start":"2025-12-19T03:36:36.375949Z","end":"2025-12-19T03:36:36.482317Z","steps":["trace[1390622980] 'read index received'  (duration: 106.360299ms)","trace[1390622980] 'applied index is now lower than readState.Index'  (duration: 7.214µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:36:36.502196Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.177833ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:36:36.502321Z","caller":"traceutil/trace.go:172","msg":"trace[820395789] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:822; }","duration":"126.374311ms","start":"2025-12-19T03:36:36.375929Z","end":"2025-12-19T03:36:36.502304Z","steps":["trace[820395789] 'agreement among raft nodes before linearized reading'  (duration: 106.606564ms)","trace[820395789] 'range keys from in-memory index tree'  (duration: 19.464267ms)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:37:25.399551Z","caller":"traceutil/trace.go:172","msg":"trace[366595385] transaction","detail":"{read_only:false; response_revision:886; number_of_response:1; }","duration":"117.892047ms","start":"2025-12-19T03:37:25.281633Z","end":"2025-12-19T03:37:25.399526Z","steps":["trace[366595385] 'process raft request'  (duration: 117.669312ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:37:27.573956Z","caller":"traceutil/trace.go:172","msg":"trace[600816068] transaction","detail":"{read_only:false; response_revision:887; number_of_response:1; }","duration":"160.16188ms","start":"2025-12-19T03:37:27.413773Z","end":"2025-12-19T03:37:27.573935Z","steps":["trace[600816068] 'process raft request'  (duration: 159.958935ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:45:55.560258Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1089}
	{"level":"info","ts":"2025-12-19T03:45:55.587246Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1089,"took":"26.357024ms","hash":276753803,"current-db-size-bytes":4255744,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":1716224,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-12-19T03:45:55.587361Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":276753803,"revision":1089,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:50:55.569123Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1335}
	{"level":"info","ts":"2025-12-19T03:50:55.573829Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1335,"took":"4.172729ms","hash":2251154073,"current-db-size-bytes":4255744,"current-db-size":"4.3 MB","current-db-size-in-use-bytes":2035712,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:50:55.574053Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2251154073,"revision":1335,"compact-revision":1089}
	
	
	==> etcd [d76db54c93b485c4b649a151f191b4a25903144828d128d58e6bc856e7adc487] <==
	{"level":"info","ts":"2025-12-19T03:33:52.844068Z","caller":"traceutil/trace.go:172","msg":"trace[172029000] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:439; }","duration":"181.60869ms","start":"2025-12-19T03:33:52.662450Z","end":"2025-12-19T03:33:52.844059Z","steps":["trace[172029000] 'range keys from in-memory index tree'  (duration: 181.2323ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:53.303795Z","caller":"traceutil/trace.go:172","msg":"trace[184832759] linearizableReadLoop","detail":"{readStateIndex:454; appliedIndex:454; }","duration":"139.889112ms","start":"2025-12-19T03:33:53.163890Z","end":"2025-12-19T03:33:53.303779Z","steps":["trace[184832759] 'read index received'  (duration: 139.881161ms)","trace[184832759] 'applied index is now lower than readState.Index'  (duration: 7.202µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:53.303971Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"140.045735ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-x9688\" limit:1 ","response":"range_response_count:1 size:5663"}
	{"level":"info","ts":"2025-12-19T03:33:53.304011Z","caller":"traceutil/trace.go:172","msg":"trace[497233565] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:439; }","duration":"140.119834ms","start":"2025-12-19T03:33:53.163884Z","end":"2025-12-19T03:33:53.304004Z","steps":["trace[497233565] 'agreement among raft nodes before linearized reading'  (duration: 139.967883ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:53.548755Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"244.319989ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5026750784611582435 > lease_revoke:<id:45c29b34ab30815c>","response":"size:29"}
	{"level":"info","ts":"2025-12-19T03:33:53.548865Z","caller":"traceutil/trace.go:172","msg":"trace[2032160460] linearizableReadLoop","detail":"{readStateIndex:455; appliedIndex:454; }","duration":"240.821592ms","start":"2025-12-19T03:33:53.308031Z","end":"2025-12-19T03:33:53.548853Z","steps":["trace[2032160460] 'read index received'  (duration: 55.056µs)","trace[2032160460] 'applied index is now lower than readState.Index'  (duration: 240.765601ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:53.548987Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"240.976509ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-728806\" limit:1 ","response":"range_response_count:1 size:3443"}
	{"level":"info","ts":"2025-12-19T03:33:53.549008Z","caller":"traceutil/trace.go:172","msg":"trace[1531602865] range","detail":"{range_begin:/registry/minions/no-preload-728806; range_end:; response_count:1; response_revision:439; }","duration":"241.003574ms","start":"2025-12-19T03:33:53.307997Z","end":"2025-12-19T03:33:53.549001Z","steps":["trace[1531602865] 'agreement among raft nodes before linearized reading'  (duration: 240.890835ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:57.789058Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.388109ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-x9688\" limit:1 ","response":"range_response_count:1 size:5663"}
	{"level":"info","ts":"2025-12-19T03:33:57.789165Z","caller":"traceutil/trace.go:172","msg":"trace[1207472380] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:443; }","duration":"150.508073ms","start":"2025-12-19T03:33:57.638641Z","end":"2025-12-19T03:33:57.789149Z","steps":["trace[1207472380] 'range keys from in-memory index tree'  (duration: 150.192687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:57.789538Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.226884ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-x9688\" limit:1 ","response":"range_response_count:1 size:5663"}
	{"level":"info","ts":"2025-12-19T03:33:57.789611Z","caller":"traceutil/trace.go:172","msg":"trace[722102572] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:443; }","duration":"125.27731ms","start":"2025-12-19T03:33:57.664285Z","end":"2025-12-19T03:33:57.789562Z","steps":["trace[722102572] 'range keys from in-memory index tree'  (duration: 125.115573ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:57.957635Z","caller":"traceutil/trace.go:172","msg":"trace[1444336190] transaction","detail":"{read_only:false; response_revision:444; number_of_response:1; }","duration":"145.813738ms","start":"2025-12-19T03:33:57.811801Z","end":"2025-12-19T03:33:57.957614Z","steps":["trace[1444336190] 'process raft request'  (duration: 145.592192ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:58.102313Z","caller":"traceutil/trace.go:172","msg":"trace[560799855] transaction","detail":"{read_only:false; response_revision:445; number_of_response:1; }","duration":"135.664689ms","start":"2025-12-19T03:33:57.966630Z","end":"2025-12-19T03:33:58.102295Z","steps":["trace[560799855] 'process raft request'  (duration: 71.392388ms)","trace[560799855] 'compare'  (duration: 64.082773ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:58.400706Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"236.886648ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-7d764666f9-x9688\" limit:1 ","response":"range_response_count:1 size:5485"}
	{"level":"info","ts":"2025-12-19T03:33:58.400772Z","caller":"traceutil/trace.go:172","msg":"trace[1007598697] range","detail":"{range_begin:/registry/pods/kube-system/coredns-7d764666f9-x9688; range_end:; response_count:1; response_revision:445; }","duration":"236.959596ms","start":"2025-12-19T03:33:58.163798Z","end":"2025-12-19T03:33:58.400757Z","steps":["trace[1007598697] 'agreement among raft nodes before linearized reading'  (duration: 42.53694ms)","trace[1007598697] 'range keys from in-memory index tree'  (duration: 194.268766ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:58.400858Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"194.30338ms","expected-duration":"100ms","prefix":"","request":"header:<ID:5026750784611582482 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" mod_revision:413 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" value_size:4103 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-12-19T03:33:58.401001Z","caller":"traceutil/trace.go:172","msg":"trace[1203841075] transaction","detail":"{read_only:false; response_revision:447; number_of_response:1; }","duration":"431.044616ms","start":"2025-12-19T03:33:57.969947Z","end":"2025-12-19T03:33:58.400992Z","steps":["trace[1203841075] 'process raft request'  (duration: 430.973541ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:58.401132Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:33:57.969935Z","time spent":"431.123085ms","remote":"127.0.0.1:42686","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1258,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-xwqww\" mod_revision:412 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-xwqww\" value_size:1199 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-xwqww\" > >"}
	{"level":"info","ts":"2025-12-19T03:33:58.401330Z","caller":"traceutil/trace.go:172","msg":"trace[1342167826] transaction","detail":"{read_only:false; response_revision:446; number_of_response:1; }","duration":"431.477899ms","start":"2025-12-19T03:33:57.969844Z","end":"2025-12-19T03:33:58.401322Z","steps":["trace[1342167826] 'process raft request'  (duration: 236.530374ms)","trace[1342167826] 'compare'  (duration: 194.196062ms)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:58.401372Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:33:57.969825Z","time spent":"431.523224ms","remote":"127.0.0.1:43152","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4163,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" mod_revision:413 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" value_size:4103 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-7d764666f9\" > >"}
	{"level":"info","ts":"2025-12-19T03:33:58.557545Z","caller":"traceutil/trace.go:172","msg":"trace[1424643893] linearizableReadLoop","detail":"{readStateIndex:464; appliedIndex:464; }","duration":"143.2294ms","start":"2025-12-19T03:33:58.414293Z","end":"2025-12-19T03:33:58.557523Z","steps":["trace[1424643893] 'read index received'  (duration: 143.221933ms)","trace[1424643893] 'applied index is now lower than readState.Index'  (duration: 6.576µs)"],"step_count":2}
	{"level":"warn","ts":"2025-12-19T03:33:58.561732Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"147.446706ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:33:58.561774Z","caller":"traceutil/trace.go:172","msg":"trace[25830164] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:447; }","duration":"147.500036ms","start":"2025-12-19T03:33:58.414266Z","end":"2025-12-19T03:33:58.561766Z","steps":["trace[25830164] 'agreement among raft nodes before linearized reading'  (duration: 143.371499ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:58.561889Z","caller":"traceutil/trace.go:172","msg":"trace[1197332737] transaction","detail":"{read_only:false; response_revision:448; number_of_response:1; }","duration":"148.544071ms","start":"2025-12-19T03:33:58.413331Z","end":"2025-12-19T03:33:58.561875Z","steps":["trace[1197332737] 'process raft request'  (duration: 144.355051ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:54:29 up 18 min,  0 users,  load average: 0.14, 0.24, 0.22
	Linux no-preload-728806 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [78fce5e539795bf0e161cfaab83885b769767cbc45af0a575c3e2e1d5d2ce929] <==
	I1219 03:33:18.590545       1 alloc.go:329] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:33:18.642677       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:33:23.132808       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:33:23.143632       1 cidrallocator.go:278] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:33:23.176279       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:33:23.541725       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:34:12.954989       1 conn.go:339] Error on socket receive: read tcp 192.168.50.172:8443->192.168.50.1:41766: use of closed network connection
	I1219 03:34:13.676429       1 handler.go:304] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:34:13.690090       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:13.690663       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:34:13.691030       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:34:13.860607       1 alloc.go:329] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.108.94.62"}
	W1219 03:34:13.873335       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:13.873739       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:34:13.883054       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:13.883109       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [8e9475c78e5b931b2d174b98c7f3095a1ee185519d00942f21d0ebf9092be1a0] <==
	I1219 03:50:58.179308       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:50:58.179543       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:50:58.179820       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:50:58.181037       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:51:58.180427       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:51:58.180623       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:51:58.180652       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:51:58.181708       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:51:58.181816       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:51:58.181827       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:53:58.181531       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:53:58.181778       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:53:58.181820       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:53:58.182202       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:53:58.182491       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:53:58.183673       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9e837e7d646d3029c97a82f448b8aa058a25d25934e9bc90a5d77e5e64e6b38d] <==
	I1219 03:33:22.405176       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.409261       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.410057       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.410865       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.405825       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.406331       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408392       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408300       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.396387       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.397085       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408403       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408524       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408411       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408418       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.397100       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408534       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408723       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408739       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.408746       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.447688       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:33:22.469765       1 range_allocator.go:433] "Set node PodCIDR" node="no-preload-728806" podCIDRs=["10.244.0.0/24"]
	I1219 03:33:22.548100       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.594503       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:22.594536       1 garbagecollector.go:166] "Garbage collector: all resource monitors have synced"
	I1219 03:33:22.594543       1 garbagecollector.go:169] "Proceeding to collect garbage"
	
	
	==> kube-controller-manager [d4b2c0b372751a1bc59d857101659a4a9bfe03f44e49264fc35f2b4bf8d1e6c2] <==
	I1219 03:48:04.328706       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:48:34.135347       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:48:34.339337       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:49:04.142080       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:49:04.349043       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:49:34.147701       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:49:34.358963       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:50:04.156541       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:50:04.370346       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:50:34.165610       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:50:34.384787       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:51:04.171524       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:51:04.395573       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:51:34.177837       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:51:34.407889       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:52:04.183923       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:52:04.417100       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:52:34.190133       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:52:34.427290       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:53:04.196064       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:53:04.440053       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:53:34.204080       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:53:34.450845       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:54:04.210653       1 resource_quota_controller.go:460] "Error during resource discovery" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:54:04.461553       1 garbagecollector.go:792] "failed to discover some groups" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [18e278297341f2ef6879a8f2a0674670948d8c3f6abb39a02ef05bd379a52a3e] <==
	I1219 03:35:59.338718       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:35:59.439239       1 shared_informer.go:377] "Caches are synced"
	I1219 03:35:59.439294       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.172"]
	E1219 03:35:59.439917       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:35:59.492895       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:35:59.492993       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:35:59.493095       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:35:59.503378       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:35:59.504604       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:35:59.504640       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:35:59.510407       1 config.go:200] "Starting service config controller"
	I1219 03:35:59.510793       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:35:59.511053       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:35:59.511124       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:35:59.511310       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:35:59.511373       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:35:59.512229       1 config.go:309] "Starting node config controller"
	I1219 03:35:59.512355       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:35:59.512373       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:35:59.611525       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:35:59.611555       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:35:59.611592       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [ba269e021c7b586768ea42947bca487c6a450c93b996c1fef9978ea650ccfa4f] <==
	I1219 03:33:25.374424       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:33:25.475572       1 shared_informer.go:377] "Caches are synced"
	I1219 03:33:25.475639       1 server.go:218] "Successfully retrieved NodeIPs" NodeIPs=["192.168.50.172"]
	E1219 03:33:25.475810       1 server.go:255] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:33:25.574978       1 server_linux.go:107] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:33:25.575374       1 server.go:266] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:33:25.575497       1 server_linux.go:136] "Using iptables Proxier"
	I1219 03:33:25.589304       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:33:25.590158       1 server.go:529] "Version info" version="v1.35.0-rc.1"
	I1219 03:33:25.590614       1 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:33:25.601510       1 config.go:200] "Starting service config controller"
	I1219 03:33:25.601721       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:33:25.601933       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:33:25.602061       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:33:25.602134       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:33:25.602299       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:33:25.604477       1 config.go:309] "Starting node config controller"
	I1219 03:33:25.604492       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:33:25.702099       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:33:25.702282       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:33:25.702561       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:33:25.705300       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [31fe13610d626eb1358e924d8da122c03230ae2e208af3021661987752f9fb4a] <==
	I1219 03:35:55.174512       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:35:56.981884       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:35:56.981984       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:35:56.982135       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:35:56.982144       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:35:57.075904       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.35.0-rc.1"
	I1219 03:35:57.079777       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:35:57.087998       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:35:57.088254       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:35:57.092862       1 shared_informer.go:370] "Waiting for caches to sync"
	I1219 03:35:57.088331       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 03:35:57.161946       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": RBAC: [role.rbac.authorization.k8s.io \"extension-apiserver-authentication-reader\" not found, role.rbac.authorization.k8s.io \"system::leader-locking-kube-scheduler\" not found]" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	I1219 03:35:58.693823       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kube-scheduler [e95ce55118a31daf218148578c1b544b8ed677b36adb51f5f11f5c4b4fe7c908] <==
	E1219 03:33:15.625993       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 03:33:15.626737       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Node"
	E1219 03:33:15.626778       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolumeClaim"
	E1219 03:33:15.626812       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 03:33:15.628170       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:33:15.628509       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:33:15.628789       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:33:15.628987       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	E1219 03:33:15.629157       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 03:33:16.433594       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicationController"
	E1219 03:33:16.469822       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Namespace"
	E1219 03:33:16.481395       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceSlice"
	E1219 03:33:16.491967       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1693" type="*v1.ConfigMap"
	E1219 03:33:16.585593       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ResourceClaim"
	E1219 03:33:16.593438       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PersistentVolume"
	E1219 03:33:16.595573       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSINode"
	E1219 03:33:16.601532       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIDriver"
	E1219 03:33:16.660912       1 reflector.go:204] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.VolumeAttachment"
	E1219 03:33:16.698560       1 reflector.go:204] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.PodDisruptionBudget"
	E1219 03:33:16.722638       1 reflector.go:204] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.StatefulSet"
	E1219 03:33:16.855068       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Service"
	E1219 03:33:16.875884       1 reflector.go:204] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.Pod"
	E1219 03:33:16.887586       1 reflector.go:204] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.ReplicaSet"
	E1219 03:33:16.943310       1 reflector.go:204] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:161" type="*v1.CSIStorageCapacity"
	I1219 03:33:19.205442       1 shared_informer.go:377] "Caches are synced"
	
	
	==> kubelet <==
	Dec 19 03:52:52 no-preload-728806 kubelet[1076]: E1219 03:52:52.404496    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:52:52 no-preload-728806 kubelet[1076]: E1219 03:52:52.406353    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:52:54 no-preload-728806 kubelet[1076]: E1219 03:52:54.405176    1076 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-m7bxg" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:52:55 no-preload-728806 kubelet[1076]: E1219 03:52:55.406018    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x9688" containerName="coredns"
	Dec 19 03:53:06 no-preload-728806 kubelet[1076]: E1219 03:53:06.405607    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:53:06 no-preload-728806 kubelet[1076]: E1219 03:53:06.407145    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:53:21 no-preload-728806 kubelet[1076]: E1219 03:53:21.404852    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:53:21 no-preload-728806 kubelet[1076]: E1219 03:53:21.408046    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:53:25 no-preload-728806 kubelet[1076]: E1219 03:53:25.405185    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/etcd-no-preload-728806" containerName="etcd"
	Dec 19 03:53:34 no-preload-728806 kubelet[1076]: E1219 03:53:34.405116    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:53:34 no-preload-728806 kubelet[1076]: E1219 03:53:34.408970    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:53:40 no-preload-728806 kubelet[1076]: E1219 03:53:40.405283    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-scheduler-no-preload-728806" containerName="kube-scheduler"
	Dec 19 03:53:47 no-preload-728806 kubelet[1076]: E1219 03:53:47.406052    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:53:47 no-preload-728806 kubelet[1076]: E1219 03:53:47.408401    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:53:49 no-preload-728806 kubelet[1076]: E1219 03:53:49.409099    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-kong-78b7499b45-k5gpr" containerName="proxy"
	Dec 19 03:53:56 no-preload-728806 kubelet[1076]: E1219 03:53:56.405600    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-controller-manager-no-preload-728806" containerName="kube-controller-manager"
	Dec 19 03:54:01 no-preload-728806 kubelet[1076]: E1219 03:54:01.409189    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/coredns-7d764666f9-x9688" containerName="coredns"
	Dec 19 03:54:01 no-preload-728806 kubelet[1076]: E1219 03:54:01.409340    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:54:01 no-preload-728806 kubelet[1076]: E1219 03:54:01.410740    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:54:12 no-preload-728806 kubelet[1076]: E1219 03:54:12.404737    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:54:12 no-preload-728806 kubelet[1076]: E1219 03:54:12.406613    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	Dec 19 03:54:13 no-preload-728806 kubelet[1076]: E1219 03:54:13.405401    1076 prober_manager.go:197] "Startup probe already exists for container" pod="kube-system/kube-apiserver-no-preload-728806" containerName="kube-apiserver"
	Dec 19 03:54:17 no-preload-728806 kubelet[1076]: E1219 03:54:17.404934    1076 prober_manager.go:221] "Liveness probe already exists for container" pod="kubernetes-dashboard/kubernetes-dashboard-metrics-scraper-594bbfb84b-m7bxg" containerName="kubernetes-dashboard-metrics-scraper"
	Dec 19 03:54:24 no-preload-728806 kubelet[1076]: E1219 03:54:24.405263    1076 prober_manager.go:209] "Readiness probe already exists for container" pod="kube-system/metrics-server-5d785b57d4-9zx57" containerName="metrics-server"
	Dec 19 03:54:24 no-preload-728806 kubelet[1076]: E1219 03:54:24.407901    1076 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-5d785b57d4-9zx57" podUID="be6c99aa-e581-4361-bbc6-76116378a05f"
	
	
	==> kubernetes-dashboard [47bce22eb1c3d05d42b992a32a07de7295a66c8221d28877512dd638f7211103] <==
	10.244.0.1 - - [19/Dec/2025:03:51:56 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:51:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:52:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:52:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:52:26 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:52:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:52:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:52:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:52:56 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:52:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:53:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:53:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:53:26 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:53:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:53:38 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:53:48 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:53:56 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:53:58 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:54:08 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:54:18 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	10.244.0.1 - - [19/Dec/2025:03:54:26 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:54:28 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.35+"
	E1219 03:52:16.193670       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:53:16.192021       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:54:16.197840       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [4ab0badeeeac9066abd92e7322c2ebdf2519d10c318b6dbb9979651de9f47051] <==
	I1219 03:36:12.895678       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:36:12.895771       1 init.go:49] Using in-cluster config
	I1219 03:36:12.895970       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [7b0a529acf54eff15dbdb1f47e2ad6584d347a66e1e80bcb4d9664c4444610fd] <==
	I1219 03:36:26.094064       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:36:26.094175       1 init.go:49] Using in-cluster config
	I1219 03:36:26.094617       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:36:26.094634       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:36:26.094642       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:36:26.094648       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:36:26.101505       1 main.go:119] "Successful initial request to the apiserver" version="v1.35.0-rc.1"
	I1219 03:36:26.101675       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:36:26.122225       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 03:36:26.127891       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [f43ad99d90fdcbc40aff01641cbe5c36c47cb8ac50d0891cd9bb37a3ba555617] <==
	I1219 03:36:22.512883       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:36:22.513145       1 init.go:48] Using in-cluster config
	I1219 03:36:22.513774       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [944453a82c5aa9d3c5e09a2a924586b9708bf3f99f7878d28d61b7e7c1fee4c8] <==
	W1219 03:54:05.505377       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:07.513501       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:07.522274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:09.526647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:09.532388       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:11.535912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:11.542279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:13.547094       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:13.553315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:15.556690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:15.566059       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:17.570665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:17.576834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:19.582072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:19.590937       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:21.596187       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:21.602356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:23.606697       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:23.616191       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:25.622686       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:25.631752       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:27.636217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:27.645728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:29.652346       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:29.662102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [974c9df3fdb6ebf85707dff617f7db917a0a2dec07eec91af2ef490c42f3aeb8] <==
	I1219 03:35:59.143005       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:36:29.154957       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-728806 -n no-preload-728806
helpers_test.go:270: (dbg) Run:  kubectl --context no-preload-728806 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-5d785b57d4-9zx57
helpers_test.go:283: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context no-preload-728806 describe pod metrics-server-5d785b57d4-9zx57
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context no-preload-728806 describe pod metrics-server-5d785b57d4-9zx57: exit status 1 (68.985809ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-5d785b57d4-9zx57" not found

** /stderr **
helpers_test.go:288: kubectl --context no-preload-728806 describe pod metrics-server-5d785b57d4-9zx57: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (542.75s)
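The AddonExistsAfterStop check that failed here waits for pods matching k8s-app=kubernetes-dashboard in the kubernetes-dashboard namespace (the same wait is spelled out for the embed-certs run below). For manual triage while a profile is still running, a rough equivalent of that wait is the pair of kubectl commands below; the context name is taken from this run and the 9m timeout mirrors the test's, but both are illustrative rather than part of the harness.

	kubectl --context no-preload-728806 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-728806 -n kubernetes-dashboard wait pod -l k8s-app=kubernetes-dashboard --for=condition=Ready --timeout=9m

In this run the profile was deleted shortly afterwards (see the delete entries in the Audit table below), so these commands are only useful while the cluster is still up.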

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.77s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:285: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832734 -n embed-certs-832734
start_stop_delete_test.go:285: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:54:58.602815804 +0000 UTC m=+5385.417444614
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-832734 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context embed-certs-832734 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (86.491162ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-832734 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
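The assertion at start_stop_delete_test.go:295 expects the dashboard-metrics-scraper deployment to reference registry.k8s.io/echoserver:1.4, i.e. the override passed earlier via addons enable dashboard --images=MetricsScraper=registry.k8s.io/echoserver:1.4 (see the Audit table below). When the deployment does exist, one way to inspect the image it actually carries is a jsonpath query along these lines (context name from this run; the query itself is illustrative):

	kubectl --context embed-certs-832734 -n kubernetes-dashboard get deploy dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'

Here the query has nothing to report, since the describe call above already failed with NotFound for that deployment.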
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832734 -n embed-certs-832734
helpers_test.go:253: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-832734 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-832734 logs -n 25: (1.759078285s)
helpers_test.go:261: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────
────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────
────────────────┤
	│ stop    │ -p no-preload-728806 --alsologtostderr -v=3                                                                                                                                                                                                            │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p embed-certs-832734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                               │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:34 UTC │
	│ stop    │ -p embed-certs-832734 --alsologtostderr -v=3                                                                                                                                                                                                           │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:34 UTC │ 19 Dec 25 03:35 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-382606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                     │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ stop    │ -p default-k8s-diff-port-382606 --alsologtostderr -v=3                                                                                                                                                                                                 │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-638861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0      │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                          │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:37 UTC │
	│ image   │ old-k8s-version-638861 image list --format=json                                                                                                                                                                                                        │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ pause   │ -p old-k8s-version-638861 --alsologtostderr -v=1                                                                                                                                                                                                       │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ unpause │ -p old-k8s-version-638861 --alsologtostderr -v=1                                                                                                                                                                                                       │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p old-k8s-version-638861                                                                                                                                                                                                                              │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p old-k8s-version-638861                                                                                                                                                                                                                              │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p newest-cni-979595 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-979595            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │                     │
	│ image   │ no-preload-728806 image list --format=json                                                                                                                                                                                                             │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ pause   │ -p no-preload-728806 --alsologtostderr -v=1                                                                                                                                                                                                            │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ unpause │ -p no-preload-728806 --alsologtostderr -v=1                                                                                                                                                                                                            │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p no-preload-728806                                                                                                                                                                                                                                   │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p no-preload-728806                                                                                                                                                                                                                                   │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p guest-269272                                                                                                                                                                                                                                        │ guest-269272                 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────
────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:54:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:54:27.192284   55963 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:54:27.192554   55963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:27.192564   55963 out.go:374] Setting ErrFile to fd 2...
	I1219 03:54:27.192569   55963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:27.192814   55963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:54:27.193330   55963 out.go:368] Setting JSON to false
	I1219 03:54:27.194272   55963 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5806,"bootTime":1766110661,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:54:27.194323   55963 start.go:143] virtualization: kvm guest
	I1219 03:54:27.196187   55963 out.go:179] * [newest-cni-979595] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:54:27.197745   55963 notify.go:221] Checking for updates...
	I1219 03:54:27.197758   55963 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:54:27.198924   55963 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:54:27.200214   55963 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:54:27.201254   55963 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:54:27.202292   55963 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:54:27.203305   55963 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:54:27.205091   55963 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:54:27.205251   55963 config.go:182] Loaded profile config "embed-certs-832734": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:54:27.205388   55963 config.go:182] Loaded profile config "guest-269272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1219 03:54:27.205524   55963 config.go:182] Loaded profile config "no-preload-728806": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:54:27.205679   55963 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:54:27.243710   55963 out.go:179] * Using the kvm2 driver based on user configuration
	I1219 03:54:27.244920   55963 start.go:309] selected driver: kvm2
	I1219 03:54:27.244948   55963 start.go:928] validating driver "kvm2" against <nil>
	I1219 03:54:27.244979   55963 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:54:27.245942   55963 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1219 03:54:27.245993   55963 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1219 03:54:27.246255   55963 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:54:27.246287   55963 cni.go:84] Creating CNI manager for ""
	I1219 03:54:27.246341   55963 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:54:27.246351   55963 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 03:54:27.246403   55963 start.go:353] cluster config:
	{Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:27.246526   55963 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:54:27.247860   55963 out.go:179] * Starting "newest-cni-979595" primary control-plane node in "newest-cni-979595" cluster
	I1219 03:54:27.248846   55963 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:54:27.248875   55963 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	I1219 03:54:27.248883   55963 cache.go:65] Caching tarball of preloaded images
	I1219 03:54:27.248960   55963 preload.go:238] Found /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:54:27.248973   55963 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1219 03:54:27.249080   55963 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/config.json ...
	I1219 03:54:27.249100   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/config.json: {Name:mk44e3bf87006423b68d2f8f5d5aa41ebe28e61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:27.249264   55963 start.go:360] acquireMachinesLock for newest-cni-979595: {Name:mkbf0ff4f4743f75373609a52c13bcf346114394 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:54:27.249299   55963 start.go:364] duration metric: took 18.947µs to acquireMachinesLock for "newest-cni-979595"
	I1219 03:54:27.249322   55963 start.go:93] Provisioning new machine with config: &{Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker Mou
ntIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:54:27.249405   55963 start.go:125] createHost starting for "" (driver="kvm2")
	I1219 03:54:27.251294   55963 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1219 03:54:27.251452   55963 start.go:159] libmachine.API.Create for "newest-cni-979595" (driver="kvm2")
	I1219 03:54:27.251487   55963 client.go:173] LocalClient.Create starting
	I1219 03:54:27.251557   55963 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem
	I1219 03:54:27.251601   55963 main.go:144] libmachine: Decoding PEM data...
	I1219 03:54:27.251630   55963 main.go:144] libmachine: Parsing certificate...
	I1219 03:54:27.251691   55963 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem
	I1219 03:54:27.251719   55963 main.go:144] libmachine: Decoding PEM data...
	I1219 03:54:27.251742   55963 main.go:144] libmachine: Parsing certificate...
	I1219 03:54:27.252120   55963 main.go:144] libmachine: creating domain...
	I1219 03:54:27.252133   55963 main.go:144] libmachine: creating network...
	I1219 03:54:27.253367   55963 main.go:144] libmachine: found existing default network
	I1219 03:54:27.253569   55963 main.go:144] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 03:54:27.254314   55963 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d2:9b:dc} reservation:<nil>}
	I1219 03:54:27.254912   55963 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:87:10} reservation:<nil>}
	I1219 03:54:27.255808   55963 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d198f0}
	I1219 03:54:27.255889   55963 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-newest-cni-979595</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 03:54:27.261039   55963 main.go:144] libmachine: creating private network mk-newest-cni-979595 192.168.61.0/24...
	I1219 03:54:27.333631   55963 main.go:144] libmachine: private network mk-newest-cni-979595 192.168.61.0/24 created
	I1219 03:54:27.333909   55963 main.go:144] libmachine: <network>
	  <name>mk-newest-cni-979595</name>
	  <uuid>44f67358-f9a8-4ac6-8075-f452ded8ea4a</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:51:d9:16'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 03:54:27.333935   55963 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595 ...
	I1219 03:54:27.333964   55963 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22230-5003/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1219 03:54:27.333978   55963 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:54:27.334111   55963 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22230-5003/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22230-5003/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1219 03:54:27.614812   55963 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa...
	I1219 03:54:27.814741   55963 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/newest-cni-979595.rawdisk...
	I1219 03:54:27.814775   55963 main.go:144] libmachine: Writing magic tar header
	I1219 03:54:27.814810   55963 main.go:144] libmachine: Writing SSH key tar header
	I1219 03:54:27.814886   55963 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595 ...
	I1219 03:54:27.814972   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595
	I1219 03:54:27.815003   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595 (perms=drwx------)
	I1219 03:54:27.815031   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5003/.minikube/machines
	I1219 03:54:27.815041   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5003/.minikube/machines (perms=drwxr-xr-x)
	I1219 03:54:27.815055   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:54:27.815063   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5003/.minikube (perms=drwxr-xr-x)
	I1219 03:54:27.815077   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5003
	I1219 03:54:27.815088   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5003 (perms=drwxrwxr-x)
	I1219 03:54:27.815101   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1219 03:54:27.815122   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1219 03:54:27.815135   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1219 03:54:27.815155   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1219 03:54:27.815165   55963 main.go:144] libmachine: checking permissions on dir: /home
	I1219 03:54:27.815180   55963 main.go:144] libmachine: skipping /home - not owner
	I1219 03:54:27.815190   55963 main.go:144] libmachine: defining domain...
	I1219 03:54:27.816307   55963 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>newest-cni-979595</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/newest-cni-979595.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-newest-cni-979595'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1219 03:54:27.825266   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:aa:ba:f9 in network default
	I1219 03:54:27.825936   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:27.825953   55963 main.go:144] libmachine: starting domain...
	I1219 03:54:27.825958   55963 main.go:144] libmachine: ensuring networks are active...
	I1219 03:54:27.826782   55963 main.go:144] libmachine: Ensuring network default is active
	I1219 03:54:27.827317   55963 main.go:144] libmachine: Ensuring network mk-newest-cni-979595 is active
	I1219 03:54:27.828166   55963 main.go:144] libmachine: getting domain XML...
	I1219 03:54:27.829724   55963 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-979595</name>
	  <uuid>8e86e3db-9966-4ac4-b938-c6eab04a469d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/newest-cni-979595.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:e5:ac:6b'/>
	      <source network='mk-newest-cni-979595'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:aa:ba:f9'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 03:54:29.311758   55963 main.go:144] libmachine: waiting for domain to start...
	I1219 03:54:29.313669   55963 main.go:144] libmachine: domain is now running
	I1219 03:54:29.313692   55963 main.go:144] libmachine: waiting for IP...
	I1219 03:54:29.314479   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:29.315133   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:29.315163   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:29.315616   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:29.315661   55963 retry.go:31] will retry after 195.944022ms: waiting for domain to come up
	I1219 03:54:29.513308   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:29.514098   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:29.514115   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:29.514495   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:29.514524   55963 retry.go:31] will retry after 336.104596ms: waiting for domain to come up
	I1219 03:54:29.852162   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:29.852921   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:29.852943   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:29.853568   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:29.853604   55963 retry.go:31] will retry after 471.020747ms: waiting for domain to come up
	I1219 03:54:30.326166   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:30.326866   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:30.326882   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:30.327232   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:30.327266   55963 retry.go:31] will retry after 592.062409ms: waiting for domain to come up
	I1219 03:54:30.921138   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:30.921852   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:30.921871   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:30.922302   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:30.922347   55963 retry.go:31] will retry after 705.614256ms: waiting for domain to come up
	I1219 03:54:31.629278   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:31.630231   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:31.630247   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:31.630530   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:31.630559   55963 retry.go:31] will retry after 891.599258ms: waiting for domain to come up
	I1219 03:54:32.524303   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:32.525099   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:32.525123   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:32.525592   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:32.525630   55963 retry.go:31] will retry after 1.059476047s: waiting for domain to come up
	I1219 03:54:33.586696   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:33.587479   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:33.587495   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:33.587854   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:33.587889   55963 retry.go:31] will retry after 1.052217642s: waiting for domain to come up
	I1219 03:54:34.642148   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:34.642841   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:34.642859   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:34.643358   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:34.643396   55963 retry.go:31] will retry after 1.614190229s: waiting for domain to come up
	I1219 03:54:36.260579   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:36.261531   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:36.261553   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:36.261978   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:36.262041   55963 retry.go:31] will retry after 1.822049353s: waiting for domain to come up
	I1219 03:54:38.085927   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:38.086803   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:38.086849   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:38.087298   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:38.087347   55963 retry.go:31] will retry after 2.017219155s: waiting for domain to come up
	I1219 03:54:40.107418   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:40.108157   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:40.108174   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:40.108505   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:40.108542   55963 retry.go:31] will retry after 3.223669681s: waiting for domain to come up
	I1219 03:54:43.333850   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:43.334542   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:43.334562   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:43.334851   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:43.334884   55963 retry.go:31] will retry after 4.229098773s: waiting for domain to come up
	I1219 03:54:47.565185   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.565919   55963 main.go:144] libmachine: domain newest-cni-979595 has current primary IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.565937   55963 main.go:144] libmachine: found domain IP: 192.168.61.160
	I1219 03:54:47.565947   55963 main.go:144] libmachine: reserving static IP address...
	I1219 03:54:47.566484   55963 main.go:144] libmachine: unable to find host DHCP lease matching {name: "newest-cni-979595", mac: "52:54:00:e5:ac:6b", ip: "192.168.61.160"} in network mk-newest-cni-979595
	I1219 03:54:47.797312   55963 main.go:144] libmachine: reserved static IP address 192.168.61.160 for domain newest-cni-979595
	I1219 03:54:47.797333   55963 main.go:144] libmachine: waiting for SSH...
	I1219 03:54:47.797340   55963 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:54:47.800771   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.801232   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:47.801262   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.801457   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:47.801747   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:47.801763   55963 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:54:47.919892   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:47.920293   55963 main.go:144] libmachine: domain creation complete
	I1219 03:54:47.922141   55963 machine.go:94] provisionDockerMachine start ...
	I1219 03:54:47.924796   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.925277   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:47.925320   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.925478   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:47.925680   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:47.925691   55963 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:54:48.036148   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:54:48.036176   55963 buildroot.go:166] provisioning hostname "newest-cni-979595"
	I1219 03:54:48.038927   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.039370   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.039395   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.039564   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:48.039817   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:48.039830   55963 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-979595 && echo "newest-cni-979595" | sudo tee /etc/hostname
	I1219 03:54:48.164124   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-979595
	
	I1219 03:54:48.167425   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.168000   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.168042   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.168230   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:48.168486   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:48.168511   55963 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-979595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-979595/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-979595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:54:48.284626   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:48.284663   55963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5003/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5003/.minikube}
	I1219 03:54:48.284702   55963 buildroot.go:174] setting up certificates
	I1219 03:54:48.284719   55963 provision.go:84] configureAuth start
	I1219 03:54:48.288127   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.288496   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.288517   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.291563   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.292579   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.292607   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.292754   55963 provision.go:143] copyHostCerts
	I1219 03:54:48.292800   55963 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem, removing ...
	I1219 03:54:48.292817   55963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem
	I1219 03:54:48.292883   55963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem (1123 bytes)
	I1219 03:54:48.293066   55963 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem, removing ...
	I1219 03:54:48.293078   55963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem
	I1219 03:54:48.293116   55963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem (1675 bytes)
	I1219 03:54:48.293192   55963 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem, removing ...
	I1219 03:54:48.293200   55963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem
	I1219 03:54:48.293223   55963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem (1082 bytes)
	I1219 03:54:48.293284   55963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem org=jenkins.newest-cni-979595 san=[127.0.0.1 192.168.61.160 localhost minikube newest-cni-979595]
	I1219 03:54:48.375174   55963 provision.go:177] copyRemoteCerts
	I1219 03:54:48.375228   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:54:48.377509   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.377839   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.377860   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.378000   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:54:48.461874   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:54:48.493405   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:54:48.522314   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:54:48.551219   55963 provision.go:87] duration metric: took 266.486029ms to configureAuth
	I1219 03:54:48.551244   55963 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:54:48.551410   55963 config.go:182] Loaded profile config "newest-cni-979595": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:54:48.551423   55963 machine.go:97] duration metric: took 629.260739ms to provisionDockerMachine
	I1219 03:54:48.551440   55963 client.go:176] duration metric: took 21.299942263s to LocalClient.Create
	I1219 03:54:48.551465   55963 start.go:167] duration metric: took 21.300013394s to libmachine.API.Create "newest-cni-979595"
	I1219 03:54:48.551477   55963 start.go:293] postStartSetup for "newest-cni-979595" (driver="kvm2")
	I1219 03:54:48.551495   55963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:54:48.551551   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:54:48.554390   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.554822   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.554852   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.554987   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:54:48.638536   55963 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:54:48.643902   55963 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:54:48.643935   55963 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/addons for local assets ...
	I1219 03:54:48.644027   55963 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/files for local assets ...
	I1219 03:54:48.644120   55963 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem -> 89782.pem in /etc/ssl/certs
	I1219 03:54:48.644229   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:54:48.656123   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:54:48.685976   55963 start.go:296] duration metric: took 134.484496ms for postStartSetup
	I1219 03:54:48.689208   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.689622   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.689654   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.689877   55963 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/config.json ...
	I1219 03:54:48.690063   55963 start.go:128] duration metric: took 21.440642543s to createHost
	I1219 03:54:48.692138   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.692449   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.692470   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.692600   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:48.692792   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:48.692802   55963 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:54:48.795176   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766116488.758832895
	
	I1219 03:54:48.795200   55963 fix.go:216] guest clock: 1766116488.758832895
	I1219 03:54:48.795210   55963 fix.go:229] Guest: 2025-12-19 03:54:48.758832895 +0000 UTC Remote: 2025-12-19 03:54:48.690082937 +0000 UTC m=+21.547019868 (delta=68.749958ms)
	I1219 03:54:48.795228   55963 fix.go:200] guest clock delta is within tolerance: 68.749958ms
	I1219 03:54:48.795233   55963 start.go:83] releasing machines lock for "newest-cni-979595", held for 21.545923375s
	I1219 03:54:48.798396   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.798826   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.798854   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.799347   55963 ssh_runner.go:195] Run: cat /version.json
	I1219 03:54:48.799418   55963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:54:48.802630   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.802754   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.803071   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.803102   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.803139   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.803169   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.803248   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:54:48.803496   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:54:48.880163   55963 ssh_runner.go:195] Run: systemctl --version
	I1219 03:54:48.907995   55963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:54:48.915513   55963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:54:48.915568   55963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:54:48.937521   55963 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:54:48.937541   55963 start.go:496] detecting cgroup driver to use...
	I1219 03:54:48.937598   55963 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:54:48.975395   55963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:54:48.991809   55963 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:54:48.991870   55963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:54:49.008941   55963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:54:49.024827   55963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:54:49.178964   55963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:54:49.388888   55963 docker.go:234] disabling docker service ...
	I1219 03:54:49.388978   55963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:54:49.407543   55963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:54:49.422806   55963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:54:49.586233   55963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:54:49.732662   55963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:54:49.752396   55963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:54:49.774745   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:54:49.786990   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:54:49.799774   55963 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 03:54:49.799853   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 03:54:49.811857   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:54:49.826340   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:54:49.842128   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:54:49.853645   55963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:54:49.866523   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:54:49.877882   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:54:49.889916   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:54:49.901808   55963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:54:49.911960   55963 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:54:49.911999   55963 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:54:49.932537   55963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:54:49.943443   55963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:50.082414   55963 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:54:50.122071   55963 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:54:50.122138   55963 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:54:50.128791   55963 retry.go:31] will retry after 1.123108402s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1219 03:54:51.252775   55963 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:54:51.259802   55963 start.go:564] Will wait 60s for crictl version
	I1219 03:54:51.259877   55963 ssh_runner.go:195] Run: which crictl
	I1219 03:54:51.264736   55963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:54:51.298280   55963 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:54:51.298358   55963 ssh_runner.go:195] Run: containerd --version
	I1219 03:54:51.322336   55963 ssh_runner.go:195] Run: containerd --version
	I1219 03:54:51.345443   55963 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1219 03:54:51.349510   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:51.350031   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:51.350072   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:51.350286   55963 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1219 03:54:51.354465   55963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:51.370765   55963 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:54:51.371964   55963 kubeadm.go:884] updating cluster {Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.160 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:54:51.372098   55963 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:54:51.372148   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:51.402179   55963 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1219 03:54:51.402289   55963 ssh_runner.go:195] Run: which lz4
	I1219 03:54:51.406369   55963 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 03:54:51.411220   55963 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 03:54:51.411241   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (340150867 bytes)
	I1219 03:54:52.713413   55963 containerd.go:563] duration metric: took 1.307076252s to copy over tarball
	I1219 03:54:52.713484   55963 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 03:54:54.157255   55963 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.443741815s)
	I1219 03:54:54.157281   55963 containerd.go:570] duration metric: took 1.443843414s to extract the tarball
	I1219 03:54:54.157288   55963 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 03:54:54.194835   55963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:54.337054   55963 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:54:54.387145   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:54.412639   55963 retry.go:31] will retry after 353.788959ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:54:54Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1219 03:54:54.767494   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:54.799284   55963 retry.go:31] will retry after 322.230976ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:54:54Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1219 03:54:55.121812   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:55.149569   55963 retry.go:31] will retry after 331.901788ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:54:55Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1219 03:54:55.482241   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:55.510785   55963 retry.go:31] will retry after 1.176527515s: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:54:55Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1219 03:54:56.688138   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:56.722080   55963 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:54:56.722111   55963 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:54:56.722121   55963 kubeadm.go:935] updating node { 192.168.61.160 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:54:56.722247   55963 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-979595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:54:56.722328   55963 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:54:56.753965   55963 cni.go:84] Creating CNI manager for ""
	I1219 03:54:56.753986   55963 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:54:56.754002   55963 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:54:56.754037   55963 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.160 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-979595 NodeName:newest-cni-979595 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:54:56.754146   55963 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-979595"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.160"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.160"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:54:56.754224   55963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:54:56.766080   55963 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:54:56.766148   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:54:56.777281   55963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1219 03:54:56.797092   55963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:54:56.817304   55963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2239 bytes)
	I1219 03:54:56.836407   55963 ssh_runner.go:195] Run: grep 192.168.61.160	control-plane.minikube.internal$ /etc/hosts
	I1219 03:54:56.840292   55963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:56.853974   55963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:56.992232   55963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:57.011998   55963 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595 for IP: 192.168.61.160
	I1219 03:54:57.012043   55963 certs.go:195] generating shared ca certs ...
	I1219 03:54:57.012064   55963 certs.go:227] acquiring lock for ca certs: {Name:mk6db7e23547b9013e447eaa0ddba18e05213211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:57.012227   55963 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key
	I1219 03:54:57.012306   55963 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key
	I1219 03:54:57.012329   55963 certs.go:257] generating profile certs ...
	I1219 03:54:57.012411   55963 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/client.key
	I1219 03:54:57.012428   55963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/client.crt with IP's: []
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	a22c8439567b6       6e38f40d628db       17 minutes ago      Running             storage-provisioner                    2                   257f02e5c309a       storage-provisioner                                     kube-system
	e1e5b294ce0f7       d9cbc9f4053ca       17 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   816b3bed82d12       kubernetes-dashboard-metrics-scraper-7685fd8b77-hjg9p   kubernetes-dashboard
	4a8a4c0810150       dd54374d0ab14       17 minutes ago      Running             kubernetes-dashboard-auth              0                   a98047015975b       kubernetes-dashboard-auth-5dd694bb47-w8bnh              kubernetes-dashboard
	a8e8ce1f347b7       a0607af4fcd8a       18 minutes ago      Running             kubernetes-dashboard-api               0                   0e7ea101ca745       kubernetes-dashboard-api-6549569bf5-86vvf               kubernetes-dashboard
	633ebe42c481f       59f642f485d26       18 minutes ago      Running             kubernetes-dashboard-web               0                   cfdd3c0920783       kubernetes-dashboard-web-5c9f966b98-z4wvm               kubernetes-dashboard
	bf9b8c16fb0f9       3a975970da2f5       18 minutes ago      Running             proxy                                  0                   6d4b0b1658432       kubernetes-dashboard-kong-9849c64bd-8sndd               kubernetes-dashboard
	439ed4dabc331       3a975970da2f5       18 minutes ago      Exited              clear-stale-pid                        0                   6d4b0b1658432       kubernetes-dashboard-kong-9849c64bd-8sndd               kubernetes-dashboard
	6a1898be03e51       52546a367cc9e       18 minutes ago      Running             coredns                                1                   a2dd723df1281       coredns-66bc5c9577-4csbt                                kube-system
	772d872ebeddd       56cc512116c8f       18 minutes ago      Running             busybox                                1                   9033c74168050       busybox                                                 default
	42e0e0df29296       6e38f40d628db       18 minutes ago      Exited              storage-provisioner                    1                   257f02e5c309a       storage-provisioner                                     kube-system
	9cb8a6c954574       36eef8e07bdd6       18 minutes ago      Running             kube-proxy                             1                   418e1caa3fec3       kube-proxy-j49gn                                        kube-system
	376bae94b419b       a3e246e9556e9       18 minutes ago      Running             etcd                                   1                   39a6b45405e06       etcd-embed-certs-832734                                 kube-system
	c4fe189224bd9       aec12dadf56dd       18 minutes ago      Running             kube-scheduler                         1                   62a0ab5babbe8       kube-scheduler-embed-certs-832734                       kube-system
	cc6fed85dd6b5       5826b25d990d7       18 minutes ago      Running             kube-controller-manager                1                   7b805c6dcca16       kube-controller-manager-embed-certs-832734              kube-system
	ecf7299638b47       aa27095f56193       18 minutes ago      Running             kube-apiserver                         1                   f7f61a577f0ad       kube-apiserver-embed-certs-832734                       kube-system
	5f35a042b5286       56cc512116c8f       20 minutes ago      Exited              busybox                                0                   bfb134b27d558       busybox                                                 default
	c5fb9f28eccc3       52546a367cc9e       21 minutes ago      Exited              coredns                                0                   189caf062373f       coredns-66bc5c9577-4csbt                                kube-system
	dfe3b60326d13       36eef8e07bdd6       21 minutes ago      Exited              kube-proxy                             0                   d24824eabb273       kube-proxy-j49gn                                        kube-system
	b1029f222f9bf       aec12dadf56dd       21 minutes ago      Exited              kube-scheduler                         0                   cbbddc552a8fb       kube-scheduler-embed-certs-832734                       kube-system
	08a7af5b4c31b       a3e246e9556e9       21 minutes ago      Exited              etcd                                   0                   841ebeae13edb       etcd-embed-certs-832734                                 kube-system
	d9f3752c9cb6f       5826b25d990d7       21 minutes ago      Exited              kube-controller-manager                0                   e7f7c3dcd1b64       kube-controller-manager-embed-certs-832734              kube-system
	fa3f43f32d054       aa27095f56193       21 minutes ago      Exited              kube-apiserver                         0                   8d0ccb0e0e4aa       kube-apiserver-embed-certs-832734                       kube-system
	
	
	==> containerd <==
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.280279962Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda9f5c75f-441e-47fa-9e9a-e7720a9da989/bf9b8c16fb0f9e189114e750adde7d419cb0dfaa4ff8f92fd8aba24449dee8d6/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.280985806Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2af670ae-dcc8-4da1-87cc-c1c3a8588ee0/633ebe42c481f61adb312abdccb0ac35f4bc0f5b69e714c39223515087903512/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.282325641Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poddba889cc-f53c-47fe-ae78-cb48e17b1acb/9cb8a6c95457431429dd54a908e9cb7a9c7dd5256dcae18d99d7e3d2fb0f22b2/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.283149701Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod742c8d21-619e-4ced-af0f-72f096b866e6/6a1898be03e51c5d313e62713bcc4c9aeaaa84b8943addcc4cde5fe7c086b72e/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.283913905Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf9de80f2-143f-4f76-95c5-4ecfc46fdd1c/a8e8ce1f347b77e7ad8788a561119d579e0b72751ead05097ce5a0e60cbed4ca/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.284782405Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd9d18cd1-0e5d-48d7-a240-8dfe94ebe90b/4a8a4c08101504a74628dbbae888596631f3647072434cbfae4baf25f7a85130/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.285841426Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod1c489208-b4ab-4f27-b914-d4930d027443/e1e5b294ce0f74f6a3ec3a5cdde7b2d1ba619235fbf1d30702b061acf1d8ba8e/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.286910572Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod0dbc166e73ceb9ece62835f572ea5535/cc6fed85dd6b5fb4d8e3de856fd8ad48a3fb14ea5f01bb1a92f6abd126e0857a/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.287970210Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3c76a61c528d30b40219645dcc0b5583/c4fe189224bd9d23a20f2ea005344414664cba5082381682e72e83376eda78a8/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.288837599Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podefd499d3-fc07-4168-a175-0bee365b79f1/a22c8439567b6522d327858bdb7e780937b3476aba6b99efe142d2e68f041b48/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.289880501Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod1da2ff6b-f366-4fcb-9aff-6f252b564072/772d872ebeddd80f97840072fc41e94b74b5a9151161d86fd99199e2350f7cac/hugetlb.2MB.events\""
	Dec 19 03:54:44 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:44.291306144Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod914ccaafe35f5d66310b2948bacbdd6b/ecf7299638b47a87a73b63a8a145b6d5d7a55a4ec2f83e8f2cf6517b605575ee/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.312455054Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod0dbc166e73ceb9ece62835f572ea5535/cc6fed85dd6b5fb4d8e3de856fd8ad48a3fb14ea5f01bb1a92f6abd126e0857a/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.313525481Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3c76a61c528d30b40219645dcc0b5583/c4fe189224bd9d23a20f2ea005344414664cba5082381682e72e83376eda78a8/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.314374304Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/podefd499d3-fc07-4168-a175-0bee365b79f1/a22c8439567b6522d327858bdb7e780937b3476aba6b99efe142d2e68f041b48/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.316480030Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod1da2ff6b-f366-4fcb-9aff-6f252b564072/772d872ebeddd80f97840072fc41e94b74b5a9151161d86fd99199e2350f7cac/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.317751320Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod914ccaafe35f5d66310b2948bacbdd6b/ecf7299638b47a87a73b63a8a145b6d5d7a55a4ec2f83e8f2cf6517b605575ee/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.319047413Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod6b0e0afa2f4a6a7cf649b449bcc0d1b8/376bae94b419b9be5bfcc2679b4605fcf724678ed94fcf6a02943ed3e2d9f50b/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.320762715Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda9f5c75f-441e-47fa-9e9a-e7720a9da989/bf9b8c16fb0f9e189114e750adde7d419cb0dfaa4ff8f92fd8aba24449dee8d6/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.322605639Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2af670ae-dcc8-4da1-87cc-c1c3a8588ee0/633ebe42c481f61adb312abdccb0ac35f4bc0f5b69e714c39223515087903512/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.324389060Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poddba889cc-f53c-47fe-ae78-cb48e17b1acb/9cb8a6c95457431429dd54a908e9cb7a9c7dd5256dcae18d99d7e3d2fb0f22b2/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.325391938Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod742c8d21-619e-4ced-af0f-72f096b866e6/6a1898be03e51c5d313e62713bcc4c9aeaaa84b8943addcc4cde5fe7c086b72e/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.326829417Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podf9de80f2-143f-4f76-95c5-4ecfc46fdd1c/a8e8ce1f347b77e7ad8788a561119d579e0b72751ead05097ce5a0e60cbed4ca/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.332076642Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podd9d18cd1-0e5d-48d7-a240-8dfe94ebe90b/4a8a4c08101504a74628dbbae888596631f3647072434cbfae4baf25f7a85130/hugetlb.2MB.events\""
	Dec 19 03:54:54 embed-certs-832734 containerd[721]: time="2025-12-19T03:54:54.333637786Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod1c489208-b4ab-4f27-b914-d4930d027443/e1e5b294ce0f74f6a3ec3a5cdde7b2d1ba619235fbf1d30702b061acf1d8ba8e/hugetlb.2MB.events\""
	
	
	==> coredns [6a1898be03e51c5d313e62713bcc4c9aeaaa84b8943addcc4cde5fe7c086b72e] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:55890 - 44584 "HINFO IN 8132124525573535760.3710171668199546970. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.019108501s
	
	
	==> coredns [c5fb9f28eccc3debe1e2dd42634197f5f7016a7227dd488079a1a152f607bc05] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = ecad3ac8c72227dcf0d7a418ea5051ee155dd74d241a13c4787cc61906568517b5647c8519c78ef2c6b724422ee4b03d6cfb27e9a87140163726e83184faf782
	[INFO] Reloading complete
	[INFO] 127.0.0.1:34993 - 54404 "HINFO IN 2587688579333303283.3984501632073358796. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.013651334s
	
	
	==> describe nodes <==
	Name:               embed-certs-832734
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-832734
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=embed-certs-832734
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_33_32_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:33:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-832734
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:54:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:53:04 +0000   Fri, 19 Dec 2025 03:33:26 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:53:04 +0000   Fri, 19 Dec 2025 03:33:26 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:53:04 +0000   Fri, 19 Dec 2025 03:33:26 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:53:04 +0000   Fri, 19 Dec 2025 03:36:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.83.196
	  Hostname:    embed-certs-832734
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 4e96458273e9466aaf48ea8d012fdc6b
	  System UUID:                4e964582-73e9-466a-af48-ea8d012fdc6b
	  Boot ID:                    2ae5f3b4-7267-4819-b472-419e7f256fa9
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-4csbt                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-embed-certs-832734                                  100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-embed-certs-832734                        250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-embed-certs-832734               200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-j49gn                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-embed-certs-832734                        100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-kcjq7                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-api-6549569bf5-86vvf                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-5dd694bb47-w8bnh               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-8sndd                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-hjg9p    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-z4wvm                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node embed-certs-832734 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node embed-certs-832734 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node embed-certs-832734 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node embed-certs-832734 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node embed-certs-832734 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     21m                kubelet          Node embed-certs-832734 status is now: NodeHasSufficientPID
	  Normal   NodeReady                21m                kubelet          Node embed-certs-832734 status is now: NodeReady
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   RegisteredNode           21m                node-controller  Node embed-certs-832734 event: Registered Node embed-certs-832734 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node embed-certs-832734 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node embed-certs-832734 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node embed-certs-832734 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 18m                kubelet          Node embed-certs-832734 has been rebooted, boot id: 2ae5f3b4-7267-4819-b472-419e7f256fa9
	  Normal   RegisteredNode           18m                node-controller  Node embed-certs-832734 event: Registered Node embed-certs-832734 in Controller
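
The "Allocated resources" block above is straightforward arithmetic over the pod table: requests and limits are summed per resource and divided by the node's allocatable capacity. A small sketch reproducing the 62% CPU and 39% memory request figures from the values printed above; nothing here is minikube-specific.

// allocmath.go - sketch: reproduce the "Allocated resources" percentages shown
// in the describe-nodes output above from the raw request totals.
package main

import "fmt"

func main() {
	// Values copied from the node description above.
	allocatableCPUMilli := int64(2000)   // cpu: 2
	allocatableMemKi := int64(3035912)   // memory: 3035912Ki
	requestedCPUMilli := int64(1250)     // cpu requests: 1250m
	requestedMemKi := int64(1170) * 1024 // memory requests: 1170Mi

	fmt.Printf("cpu:    %d%%\n", requestedCPUMilli*100/allocatableCPUMilli) // 62
	fmt.Printf("memory: %d%%\n", requestedMemKi*100/allocatableMemKi)       // 39
}

The limits column works the same way (1 CPU of 2 allocatable gives the 50% shown above).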
	
	
	==> dmesg <==
	[Dec19 03:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000007] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001667] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.005181] (rpcbind)[120]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.720014] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.088453] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.329274] kauditd_printk_skb: 133 callbacks suppressed
	[  +5.392195] kauditd_printk_skb: 140 callbacks suppressed
	[  +1.908801] kauditd_printk_skb: 255 callbacks suppressed
	[  +3.635308] kauditd_printk_skb: 59 callbacks suppressed
	[  +9.689011] kauditd_printk_skb: 177 callbacks suppressed
	[  +6.399435] kauditd_printk_skb: 27 callbacks suppressed
	[Dec19 03:37] kauditd_printk_skb: 33 callbacks suppressed
	[ +10.835811] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [08a7af5b4c31b1181858e51d510aca3efc7b8c3c067c43ad905f888e6f55c08b] <==
	{"level":"warn","ts":"2025-12-19T03:33:27.793179Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38302","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.806595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38308","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.833983Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38334","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.842641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38360","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.871036Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38386","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.885154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38400","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.900731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.923546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.935660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38452","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.963827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.971835Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38496","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:27.988308Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.011038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.046323Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38528","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.083089Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.101189Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.110643Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.131285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.157863Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:28.238500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:33:34.374744Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"304.303603ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/kube-system\" limit:1 ","response":"range_response_count:1 size:351"}
	{"level":"info","ts":"2025-12-19T03:33:34.374961Z","caller":"traceutil/trace.go:172","msg":"trace[1634756299] range","detail":"{range_begin:/registry/namespaces/kube-system; range_end:; response_count:1; response_revision:317; }","duration":"304.56404ms","start":"2025-12-19T03:33:34.070381Z","end":"2025-12-19T03:33:34.374945Z","steps":["trace[1634756299] 'range keys from in-memory index tree'  (duration: 304.050866ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:33:34.375060Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:33:34.070366Z","time spent":"304.627597ms","remote":"127.0.0.1:37760","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":1,"response size":375,"request content":"key:\"/registry/namespaces/kube-system\" limit:1 "}
	{"level":"info","ts":"2025-12-19T03:33:52.712697Z","caller":"traceutil/trace.go:172","msg":"trace[668183353] transaction","detail":"{read_only:false; response_revision:453; number_of_response:1; }","duration":"128.941779ms","start":"2025-12-19T03:33:52.583727Z","end":"2025-12-19T03:33:52.712669Z","steps":["trace[668183353] 'process raft request'  (duration: 128.814908ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:33:53.812965Z","caller":"traceutil/trace.go:172","msg":"trace[2018071080] transaction","detail":"{read_only:false; response_revision:454; number_of_response:1; }","duration":"113.257824ms","start":"2025-12-19T03:33:53.699695Z","end":"2025-12-19T03:33:53.812953Z","steps":["trace[2018071080] 'process raft request'  (duration: 113.163652ms)"],"step_count":1}
	
	
	==> etcd [376bae94b419b9be5bfcc2679b4605fcf724678ed94fcf6a02943ed3e2d9f50b] <==
	{"level":"warn","ts":"2025-12-19T03:36:59.780234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46448","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.806244Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.826525Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.863343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46524","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.889063Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.902287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.925683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46582","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:59.957714Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46604","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:37:11.199335Z","caller":"traceutil/trace.go:172","msg":"trace[250933048] transaction","detail":"{read_only:false; response_revision:870; number_of_response:1; }","duration":"216.733003ms","start":"2025-12-19T03:37:10.982585Z","end":"2025-12-19T03:37:11.199318Z","steps":["trace[250933048] 'process raft request'  (duration: 216.554031ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:37:11.205715Z","caller":"traceutil/trace.go:172","msg":"trace[158165044] transaction","detail":"{read_only:false; response_revision:871; number_of_response:1; }","duration":"216.660217ms","start":"2025-12-19T03:37:10.989038Z","end":"2025-12-19T03:37:11.205698Z","steps":["trace[158165044] 'process raft request'  (duration: 216.361597ms)"],"step_count":1}
	{"level":"info","ts":"2025-12-19T03:46:22.831467Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1115}
	{"level":"info","ts":"2025-12-19T03:46:22.869817Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1115,"took":"36.933168ms","hash":114613448,"current-db-size-bytes":4173824,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":1744896,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-12-19T03:46:22.869876Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":114613448,"revision":1115,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:51:22.838627Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1360}
	{"level":"info","ts":"2025-12-19T03:51:22.843706Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1360,"took":"4.579786ms","hash":3317234686,"current-db-size-bytes":4173824,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":2056192,"current-db-size-in-use":"2.1 MB"}
	{"level":"info","ts":"2025-12-19T03:51:22.843767Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3317234686,"revision":1360,"compact-revision":1115}
	{"level":"info","ts":"2025-12-19T03:54:55.523823Z","caller":"traceutil/trace.go:172","msg":"trace[1699508560] linearizableReadLoop","detail":"{readStateIndex:2052; appliedIndex:2052; }","duration":"378.190919ms","start":"2025-12-19T03:54:55.145578Z","end":"2025-12-19T03:54:55.523769Z","steps":["trace[1699508560] 'read index received'  (duration: 378.181418ms)","trace[1699508560] 'applied index is now lower than readState.Index'  (duration: 8.39µs)"],"step_count":2}
	{"level":"info","ts":"2025-12-19T03:54:55.524022Z","caller":"traceutil/trace.go:172","msg":"trace[686043691] transaction","detail":"{read_only:false; response_revision:1777; number_of_response:1; }","duration":"540.470971ms","start":"2025-12-19T03:54:54.983541Z","end":"2025-12-19T03:54:55.524012Z","steps":["trace[686043691] 'process raft request'  (duration: 540.286609ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:55.524234Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"378.525125ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:55.524276Z","caller":"traceutil/trace.go:172","msg":"trace[2012489644] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1777; }","duration":"378.697071ms","start":"2025-12-19T03:54:55.145568Z","end":"2025-12-19T03:54:55.524265Z","steps":["trace[2012489644] 'agreement among raft nodes before linearized reading'  (duration: 378.49962ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:55.524317Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-12-19T03:54:54.983522Z","time spent":"540.59261ms","remote":"127.0.0.1:55428","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1103,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1776 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1030 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2025-12-19T03:54:55.524470Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"273.919229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:55.524517Z","caller":"traceutil/trace.go:172","msg":"trace[1275840724] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1777; }","duration":"273.938388ms","start":"2025-12-19T03:54:55.250544Z","end":"2025-12-19T03:54:55.524482Z","steps":["trace[1275840724] 'agreement among raft nodes before linearized reading'  (duration: 273.903944ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:55.524680Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"199.865846ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:55.524701Z","caller":"traceutil/trace.go:172","msg":"trace[1763168119] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1777; }","duration":"199.904824ms","start":"2025-12-19T03:54:55.324790Z","end":"2025-12-19T03:54:55.524695Z","steps":["trace[1763168119] 'agreement among raft nodes before linearized reading'  (duration: 199.868293ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:55:00 up 18 min,  0 users,  load average: 0.25, 0.26, 0.20
	Linux embed-certs-832734 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [ecf7299638b47a87a73b63a8a145b6d5d7a55a4ec2f83e8f2cf6517b605575ee] <==
	E1219 03:51:25.866259       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:51:25.866286       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 03:51:25.866335       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:51:25.867420       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:52:25.866679       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:52:25.866795       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:52:25.866813       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:52:25.867845       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:52:25.867916       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:52:25.867946       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:54:25.866961       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:54:25.867131       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:54:25.867238       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:54:25.869198       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:54:25.869240       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:54:25.869258       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-apiserver [fa3f43f32d05406bc540cafbb00dd00cd5324efa640039d9086a756b209638c1] <==
	I1219 03:33:32.206989       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:33:36.876344       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:33:37.138636       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:33:37.171349       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:33:37.299056       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:34:30.087473       1 conn.go:339] Error on socket receive: read tcp 192.168.83.196:8443->192.168.83.1:35574: use of closed network connection
	I1219 03:34:30.767459       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:34:30.774314       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:30.774360       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:34:30.774404       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:34:30.929449       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.107.216.145"}
	W1219 03:34:30.953894       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:30.953993       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:34:30.956165       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: Operation cannot be fulfilled on apiservices.apiregistration.k8s.io \"v1beta1.metrics.k8s.io\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	W1219 03:34:30.964201       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:34:30.964260       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
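
Both apiserver logs show the same pattern: the aggregated v1beta1.metrics.k8s.io API stays unavailable because its backing metrics-server pod never starts (the kubelet section further down shows it stuck in ImagePullBackOff). A hedged sketch of one way to confirm the APIService condition from a workstation with kubectl on the PATH; the exact invocation is an assumption about how one might check, not something the test harness runs.

// apiservicecheck.go - sketch: ask kubectl whether the aggregated
// v1beta1.metrics.k8s.io APIService reports Available=True.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// jsonpath filter picks the status of the "Available" condition.
	out, err := exec.Command("kubectl", "get", "apiservice", "v1beta1.metrics.k8s.io",
		"-o", `jsonpath={.status.conditions[?(@.type=="Available")].status}`).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("v1beta1.metrics.k8s.io Available=%s\n", out)
}

While metrics-server is not serving, this should print False, matching the 503 responses logged above.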
	
	
	==> kube-controller-manager [cc6fed85dd6b5fb4d8e3de856fd8ad48a3fb14ea5f01bb1a92f6abd126e0857a] <==
	I1219 03:48:31.254505       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:49:01.107790       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:49:01.266252       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:49:31.114112       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:49:31.275296       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:50:01.119511       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:50:01.284875       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:50:31.126638       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:50:31.296577       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:51:01.133192       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:51:01.310748       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:51:31.138841       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:51:31.325740       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:52:01.147081       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:52:01.336699       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:52:31.155508       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:52:31.349473       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:53:01.160909       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:53:01.359208       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:53:31.166729       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:53:31.373729       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:54:01.173849       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:54:01.384747       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:54:31.180864       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:54:31.393736       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [d9f3752c9cb6fc42c4c6a525ab0da138c84562c0cd007f6ae8c924440a454275] <==
	I1219 03:33:36.179955       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:33:36.181212       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1219 03:33:36.181282       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:33:36.183728       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1219 03:33:36.184019       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:33:36.195024       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1219 03:33:36.197363       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1219 03:33:36.205714       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1219 03:33:36.206888       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1219 03:33:36.206970       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-832734"
	I1219 03:33:36.207007       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1219 03:33:36.213216       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:33:36.214316       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:33:36.224887       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 03:33:36.225965       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:33:36.226022       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:33:36.226714       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:33:36.228510       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:33:36.229268       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:33:36.229310       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 03:33:36.229328       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1219 03:33:36.229704       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:33:36.229741       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1219 03:33:36.230838       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:33:36.244292       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [9cb8a6c95457431429dd54a908e9cb7a9c7dd5256dcae18d99d7e3d2fb0f22b2] <==
	I1219 03:36:26.004139       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:36:26.104666       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:36:26.104724       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.196"]
	E1219 03:36:26.104838       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:36:26.163444       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:36:26.163803       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:36:26.164132       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:36:26.178458       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:36:26.179601       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:36:26.179640       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:36:26.182488       1 config.go:200] "Starting service config controller"
	I1219 03:36:26.182514       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:36:26.182535       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:36:26.182538       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:36:26.182567       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:36:26.182588       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:36:26.189282       1 config.go:309] "Starting node config controller"
	I1219 03:36:26.189307       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:36:26.283573       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:36:26.283603       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:36:26.283643       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:36:26.289975       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-proxy [dfe3b60326d13a2ff068327c17194ff77185eaf8fe59b42f7aa697f3ca2a4628] <==
	I1219 03:33:38.921610       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:33:39.024217       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:33:39.028877       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.83.196"]
	E1219 03:33:39.031260       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:33:39.108187       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:33:39.108271       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:33:39.108306       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:33:39.121122       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:33:39.121734       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:33:39.122150       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:33:39.131505       1 config.go:200] "Starting service config controller"
	I1219 03:33:39.131555       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:33:39.131605       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:33:39.131610       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:33:39.131621       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:33:39.131624       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:33:39.132614       1 config.go:309] "Starting node config controller"
	I1219 03:33:39.132655       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:33:39.132664       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:33:39.231868       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:33:39.232067       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:33:39.232479       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
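
Both kube-proxy instances fall back to IPv4-only because the guest kernel exposes no IPv6 "nat" table ("Table does not exist (do you need to insmod?)"). A rough sketch that checks whether the ip6table_nat module is loaded, assuming a Linux guest with /proc/modules; the module name is the usual provider of that table, which is an assumption here, and a kernel with the table built in would not appear in /proc/modules at all.

// ip6natcheck.go - sketch: look for the ip6table_nat module in /proc/modules,
// which would provide the IPv6 "nat" table kube-proxy failed to find above.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/proc/modules")
	if err != nil {
		fmt.Println("cannot read /proc/modules:", err)
		return
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	for scanner.Scan() {
		if strings.HasPrefix(scanner.Text(), "ip6table_nat ") {
			fmt.Println("ip6table_nat is loaded")
			return
		}
	}
	fmt.Println("ip6table_nat not loaded (consistent with the kube-proxy warning above)")
}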
	
	
	==> kube-scheduler [b1029f222f9bfc488f8a6e38154e34404bea6c9773db003212a53269860d7d0e] <==
	E1219 03:33:29.422651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:33:29.424998       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:33:29.425310       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:33:29.425580       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:33:29.425856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:33:29.426222       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:33:29.426578       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:33:29.426908       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:33:29.432826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:33:29.432635       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1219 03:33:29.433383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:33:29.433469       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1219 03:33:29.433509       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:33:29.433540       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:33:29.433585       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:33:29.433609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:33:29.433682       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:33:29.434093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:33:29.435224       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1219 03:33:30.241084       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1219 03:33:30.271931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:33:30.411828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:33:30.421022       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:33:30.474859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1219 03:33:33.007891       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [c4fe189224bd9d23a20f2ea005344414664cba5082381682e72e83376eda78a8] <==
	I1219 03:36:22.380222       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:36:24.819473       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:36:24.819552       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:36:24.819586       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:36:24.819600       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:36:24.900843       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:36:24.900902       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:36:24.912956       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:36:24.913765       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:36:24.916931       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:36:24.917978       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1219 03:36:25.015006       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:50:16 embed-certs-832734 kubelet[1085]: E1219 03:50:16.974730    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:50:29 embed-certs-832734 kubelet[1085]: E1219 03:50:29.974381    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:50:42 embed-certs-832734 kubelet[1085]: E1219 03:50:42.973485    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:50:56 embed-certs-832734 kubelet[1085]: E1219 03:50:56.973612    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:51:07 embed-certs-832734 kubelet[1085]: E1219 03:51:07.974095    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:51:19 embed-certs-832734 kubelet[1085]: E1219 03:51:19.973976    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:51:31 embed-certs-832734 kubelet[1085]: E1219 03:51:31.973925    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:51:42 embed-certs-832734 kubelet[1085]: E1219 03:51:42.974093    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:51:55 embed-certs-832734 kubelet[1085]: E1219 03:51:55.974742    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:52:07 embed-certs-832734 kubelet[1085]: E1219 03:52:07.977534    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:52:19 embed-certs-832734 kubelet[1085]: E1219 03:52:19.975282    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:52:33 embed-certs-832734 kubelet[1085]: E1219 03:52:33.983056    1085 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:52:33 embed-certs-832734 kubelet[1085]: E1219 03:52:33.983146    1085 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:52:33 embed-certs-832734 kubelet[1085]: E1219 03:52:33.983640    1085 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-kcjq7_kube-system(3df93f50-47ae-4697-9567-9a02426c3a6c): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 19 03:52:33 embed-certs-832734 kubelet[1085]: E1219 03:52:33.983701    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:52:45 embed-certs-832734 kubelet[1085]: E1219 03:52:45.977900    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:52:56 embed-certs-832734 kubelet[1085]: E1219 03:52:56.974965    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:53:11 embed-certs-832734 kubelet[1085]: E1219 03:53:11.973365    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:53:26 embed-certs-832734 kubelet[1085]: E1219 03:53:26.973877    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:53:41 embed-certs-832734 kubelet[1085]: E1219 03:53:41.973600    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:53:55 embed-certs-832734 kubelet[1085]: E1219 03:53:55.974924    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:54:06 embed-certs-832734 kubelet[1085]: E1219 03:54:06.973671    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:54:20 embed-certs-832734 kubelet[1085]: E1219 03:54:20.974541    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:54:34 embed-certs-832734 kubelet[1085]: E1219 03:54:34.973866    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	Dec 19 03:54:46 embed-certs-832734 kubelet[1085]: E1219 03:54:46.973858    1085 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-kcjq7" podUID="3df93f50-47ae-4697-9567-9a02426c3a6c"
	
	
	==> kubernetes-dashboard [4a8a4c08101504a74628dbbae888596631f3647072434cbfae4baf25f7a85130] <==
	I1219 03:37:00.657731       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:37:00.657843       1 init.go:49] Using in-cluster config
	I1219 03:37:00.658266       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [633ebe42c481f61adb312abdccb0ac35f4bc0f5b69e714c39223515087903512] <==
	I1219 03:36:53.462967       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:36:53.464557       1 init.go:48] Using in-cluster config
	I1219 03:36:53.466753       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [a8e8ce1f347b77e7ad8788a561119d579e0b72751ead05097ce5a0e60cbed4ca] <==
	I1219 03:36:57.162259       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:36:57.162396       1 init.go:49] Using in-cluster config
	I1219 03:36:57.162866       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:36:57.162911       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:36:57.162921       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:36:57.162930       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:36:57.251593       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:36:57.251676       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:36:57.266979       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	E1219 03:36:57.267554       1 manager.go:96] Metric client health check failed: the server is currently unable to handle the request (get services kubernetes-dashboard-metrics-scraper). Retrying in 30 seconds.
	I1219 03:37:27.274131       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [e1e5b294ce0f74f6a3ec3a5cdde7b2d1ba619235fbf1d30702b061acf1d8ba8e] <==
	10.244.0.1 - - [19/Dec/2025:03:52:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:52:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:52:27 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:52:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:52:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:52:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:52:57 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:53:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:27 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:53:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:57 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:54:07 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:17 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:27 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:27 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:54:37 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:47 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:57 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:57 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	E1219 03:53:04.393758       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:54:04.390302       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	
	
	==> storage-provisioner [42e0e0df29296b9adf1cb69856162d4fe721dd68ba40736b43c6c25859de7cb4] <==
	I1219 03:36:25.766118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:36:55.796277       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a22c8439567b6522d327858bdb7e780937b3476aba6b99efe142d2e68f041b48] <==
	W1219 03:54:34.871116       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:36.875862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:36.882766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:38.887231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:38.893398       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:40.897935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:40.905727       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:42.910301       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:42.918051       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:44.921721       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:44.926972       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:46.930616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:46.939706       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:48.945102       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:48.954365       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:50.958713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:50.963612       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:52.969409       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:52.977312       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:54.980919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:55.527583       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:57.531904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:57.542133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:59.546734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:54:59.557205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832734 -n embed-certs-832734
helpers_test.go:270: (dbg) Run:  kubectl --context embed-certs-832734 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-kcjq7
helpers_test.go:283: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context embed-certs-832734 describe pod metrics-server-746fcd58dc-kcjq7
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context embed-certs-832734 describe pod metrics-server-746fcd58dc-kcjq7: exit status 1 (75.938803ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-kcjq7" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context embed-certs-832734 describe pod metrics-server-746fcd58dc-kcjq7: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (542.77s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.4s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1219 03:46:37.985663    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:46:39.042633    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:47:00.583813    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:47:08.866068    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:47:57.747813    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:48:02.090048    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:49:44.847399    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:49:49.963129    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:49:54.464199    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:50:21.169502    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:50:34.380257    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:13.008737    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:17.510713    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:37.986105    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:39.042089    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:44.212497    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:51:57.431188    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:00.584341    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:08.866256    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:52:57.748032    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/bridge-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:01.030882    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:23.631604    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:53:31.911508    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:285: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
start_stop_delete_test.go:285: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2025-12-19 03:55:33.972977941 +0000 UTC m=+5420.787606760
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-382606 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-382606 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: exit status 1 (57.421014ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "dashboard-metrics-scraper" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-382606 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": exit status 1
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:248: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
helpers_test.go:253: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-382606 logs -n 25
E1219 03:55:34.379571    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-382606 logs -n 25: (1.521955061s)
helpers_test.go:261: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────
────────────────┐
	│ COMMAND │                                                                                                                          ARGS                                                                                                                          │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────
────────────────┤
	│ addons  │ enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                           │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1                                                                                       │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                          │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:35 UTC │
	│ start   │ -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                                             │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:35 UTC │ 19 Dec 25 03:36 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:36 UTC │
	│ start   │ -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3                                                                           │ default-k8s-diff-port-382606 │ jenkins │ v1.37.0 │ 19 Dec 25 03:36 UTC │ 19 Dec 25 03:37 UTC │
	│ image   │ old-k8s-version-638861 image list --format=json                                                                                                                                                                                                        │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ pause   │ -p old-k8s-version-638861 --alsologtostderr -v=1                                                                                                                                                                                                       │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ unpause │ -p old-k8s-version-638861 --alsologtostderr -v=1                                                                                                                                                                                                       │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p old-k8s-version-638861                                                                                                                                                                                                                              │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p old-k8s-version-638861                                                                                                                                                                                                                              │ old-k8s-version-638861       │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ start   │ -p newest-cni-979595 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1 │ newest-cni-979595            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:55 UTC │
	│ image   │ no-preload-728806 image list --format=json                                                                                                                                                                                                             │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ pause   │ -p no-preload-728806 --alsologtostderr -v=1                                                                                                                                                                                                            │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ unpause │ -p no-preload-728806 --alsologtostderr -v=1                                                                                                                                                                                                            │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p no-preload-728806                                                                                                                                                                                                                                   │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p no-preload-728806                                                                                                                                                                                                                                   │ no-preload-728806            │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ delete  │ -p guest-269272                                                                                                                                                                                                                                        │ guest-269272                 │ jenkins │ v1.37.0 │ 19 Dec 25 03:54 UTC │ 19 Dec 25 03:54 UTC │
	│ image   │ embed-certs-832734 image list --format=json                                                                                                                                                                                                            │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:55 UTC │ 19 Dec 25 03:55 UTC │
	│ pause   │ -p embed-certs-832734 --alsologtostderr -v=1                                                                                                                                                                                                           │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:55 UTC │ 19 Dec 25 03:55 UTC │
	│ unpause │ -p embed-certs-832734 --alsologtostderr -v=1                                                                                                                                                                                                           │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:55 UTC │ 19 Dec 25 03:55 UTC │
	│ delete  │ -p embed-certs-832734                                                                                                                                                                                                                                  │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:55 UTC │ 19 Dec 25 03:55 UTC │
	│ delete  │ -p embed-certs-832734                                                                                                                                                                                                                                  │ embed-certs-832734           │ jenkins │ v1.37.0 │ 19 Dec 25 03:55 UTC │ 19 Dec 25 03:55 UTC │
	│ addons  │ enable metrics-server -p newest-cni-979595 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                                │ newest-cni-979595            │ jenkins │ v1.37.0 │ 19 Dec 25 03:55 UTC │ 19 Dec 25 03:55 UTC │
	│ stop    │ -p newest-cni-979595 --alsologtostderr -v=3                                                                                                                                                                                                            │ newest-cni-979595            │ jenkins │ v1.37.0 │ 19 Dec 25 03:55 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────
────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 03:54:27
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 03:54:27.192284   55963 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:54:27.192554   55963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:27.192564   55963 out.go:374] Setting ErrFile to fd 2...
	I1219 03:54:27.192569   55963 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:54:27.192814   55963 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:54:27.193330   55963 out.go:368] Setting JSON to false
	I1219 03:54:27.194272   55963 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":5806,"bootTime":1766110661,"procs":192,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:54:27.194323   55963 start.go:143] virtualization: kvm guest
	I1219 03:54:27.196187   55963 out.go:179] * [newest-cni-979595] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:54:27.197745   55963 notify.go:221] Checking for updates...
	I1219 03:54:27.197758   55963 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:54:27.198924   55963 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:54:27.200214   55963 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:54:27.201254   55963 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:54:27.202292   55963 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:54:27.203305   55963 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:54:27.205091   55963 config.go:182] Loaded profile config "default-k8s-diff-port-382606": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:54:27.205251   55963 config.go:182] Loaded profile config "embed-certs-832734": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:54:27.205388   55963 config.go:182] Loaded profile config "guest-269272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1219 03:54:27.205524   55963 config.go:182] Loaded profile config "no-preload-728806": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:54:27.205679   55963 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:54:27.243710   55963 out.go:179] * Using the kvm2 driver based on user configuration
	I1219 03:54:27.244920   55963 start.go:309] selected driver: kvm2
	I1219 03:54:27.244948   55963 start.go:928] validating driver "kvm2" against <nil>
	I1219 03:54:27.244979   55963 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:54:27.245942   55963 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1219 03:54:27.245993   55963 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1219 03:54:27.246255   55963 start_flags.go:1012] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:54:27.246287   55963 cni.go:84] Creating CNI manager for ""
	I1219 03:54:27.246341   55963 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:54:27.246351   55963 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 03:54:27.246403   55963 start.go:353] cluster config:
	{Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:27.246526   55963 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 03:54:27.247860   55963 out.go:179] * Starting "newest-cni-979595" primary control-plane node in "newest-cni-979595" cluster
	I1219 03:54:27.248846   55963 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:54:27.248875   55963 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	I1219 03:54:27.248883   55963 cache.go:65] Caching tarball of preloaded images
	I1219 03:54:27.248960   55963 preload.go:238] Found /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I1219 03:54:27.248973   55963 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-rc.1 on containerd
	I1219 03:54:27.249080   55963 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/config.json ...
	I1219 03:54:27.249100   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/config.json: {Name:mk44e3bf87006423b68d2f8f5d5aa41ebe28e61a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:27.249264   55963 start.go:360] acquireMachinesLock for newest-cni-979595: {Name:mkbf0ff4f4743f75373609a52c13bcf346114394 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I1219 03:54:27.249299   55963 start.go:364] duration metric: took 18.947µs to acquireMachinesLock for "newest-cni-979595"
	I1219 03:54:27.249322   55963 start.go:93] Provisioning new machine with config: &{Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:54:27.249405   55963 start.go:125] createHost starting for "" (driver="kvm2")
	I1219 03:54:27.251294   55963 out.go:252] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I1219 03:54:27.251452   55963 start.go:159] libmachine.API.Create for "newest-cni-979595" (driver="kvm2")
	I1219 03:54:27.251487   55963 client.go:173] LocalClient.Create starting
	I1219 03:54:27.251557   55963 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem
	I1219 03:54:27.251601   55963 main.go:144] libmachine: Decoding PEM data...
	I1219 03:54:27.251630   55963 main.go:144] libmachine: Parsing certificate...
	I1219 03:54:27.251691   55963 main.go:144] libmachine: Reading certificate data from /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem
	I1219 03:54:27.251719   55963 main.go:144] libmachine: Decoding PEM data...
	I1219 03:54:27.251742   55963 main.go:144] libmachine: Parsing certificate...
	I1219 03:54:27.252120   55963 main.go:144] libmachine: creating domain...
	I1219 03:54:27.252133   55963 main.go:144] libmachine: creating network...
	I1219 03:54:27.253367   55963 main.go:144] libmachine: found existing default network
	I1219 03:54:27.253569   55963 main.go:144] libmachine: <network connections='4'>
	  <name>default</name>
	  <uuid>c61344c2-dba2-46dd-a21a-34776d235985</uuid>
	  <forward mode='nat'>
	    <nat>
	      <port start='1024' end='65535'/>
	    </nat>
	  </forward>
	  <bridge name='virbr0' stp='on' delay='0'/>
	  <mac address='52:54:00:10:a2:1d'/>
	  <ip address='192.168.122.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.122.2' end='192.168.122.254'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 03:54:27.254314   55963 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:d2:9b:dc} reservation:<nil>}
	I1219 03:54:27.254912   55963 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:3a:87:10} reservation:<nil>}
	I1219 03:54:27.255808   55963 network.go:206] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d198f0}
	I1219 03:54:27.255889   55963 main.go:144] libmachine: defining private network:
	
	<network>
	  <name>mk-newest-cni-979595</name>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 03:54:27.261039   55963 main.go:144] libmachine: creating private network mk-newest-cni-979595 192.168.61.0/24...
	I1219 03:54:27.333631   55963 main.go:144] libmachine: private network mk-newest-cni-979595 192.168.61.0/24 created
	I1219 03:54:27.333909   55963 main.go:144] libmachine: <network>
	  <name>mk-newest-cni-979595</name>
	  <uuid>44f67358-f9a8-4ac6-8075-f452ded8ea4a</uuid>
	  <bridge name='virbr3' stp='on' delay='0'/>
	  <mac address='52:54:00:51:d9:16'/>
	  <dns enable='no'/>
	  <ip address='192.168.61.1' netmask='255.255.255.0'>
	    <dhcp>
	      <range start='192.168.61.2' end='192.168.61.253'/>
	    </dhcp>
	  </ip>
	</network>
	
	I1219 03:54:27.333935   55963 main.go:144] libmachine: setting up store path in /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595 ...
	I1219 03:54:27.333964   55963 main.go:144] libmachine: building disk image from file:///home/jenkins/minikube-integration/22230-5003/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1219 03:54:27.333978   55963 common.go:152] Making disk image using store path: /home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:54:27.334111   55963 main.go:144] libmachine: Downloading /home/jenkins/minikube-integration/22230-5003/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/22230-5003/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso...
	I1219 03:54:27.614812   55963 common.go:159] Creating ssh key: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa...
	I1219 03:54:27.814741   55963 common.go:165] Creating raw disk image: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/newest-cni-979595.rawdisk...
	I1219 03:54:27.814775   55963 main.go:144] libmachine: Writing magic tar header
	I1219 03:54:27.814810   55963 main.go:144] libmachine: Writing SSH key tar header
	I1219 03:54:27.814886   55963 common.go:179] Fixing permissions on /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595 ...
	I1219 03:54:27.814972   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595
	I1219 03:54:27.815003   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595 (perms=drwx------)
	I1219 03:54:27.815031   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5003/.minikube/machines
	I1219 03:54:27.815041   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5003/.minikube/machines (perms=drwxr-xr-x)
	I1219 03:54:27.815055   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:54:27.815063   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5003/.minikube (perms=drwxr-xr-x)
	I1219 03:54:27.815077   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration/22230-5003
	I1219 03:54:27.815088   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration/22230-5003 (perms=drwxrwxr-x)
	I1219 03:54:27.815101   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins/minikube-integration
	I1219 03:54:27.815122   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I1219 03:54:27.815135   55963 main.go:144] libmachine: checking permissions on dir: /home/jenkins
	I1219 03:54:27.815155   55963 main.go:144] libmachine: setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I1219 03:54:27.815165   55963 main.go:144] libmachine: checking permissions on dir: /home
	I1219 03:54:27.815180   55963 main.go:144] libmachine: skipping /home - not owner
	I1219 03:54:27.815190   55963 main.go:144] libmachine: defining domain...
	I1219 03:54:27.816307   55963 main.go:144] libmachine: defining domain using XML: 
	<domain type='kvm'>
	  <name>newest-cni-979595</name>
	  <memory unit='MiB'>3072</memory>
	  <vcpu>2</vcpu>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough'>
	  </cpu>
	  <os>
	    <type>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <devices>
	    <disk type='file' device='cdrom'>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' cache='default' io='threads' />
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/newest-cni-979595.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	    </disk>
	    <interface type='network'>
	      <source network='mk-newest-cni-979595'/>
	      <model type='virtio'/>
	    </interface>
	    <interface type='network'>
	      <source network='default'/>
	      <model type='virtio'/>
	    </interface>
	    <serial type='pty'>
	      <target port='0'/>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	    </rng>
	  </devices>
	</domain>
	
	I1219 03:54:27.825266   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:aa:ba:f9 in network default
	I1219 03:54:27.825936   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:27.825953   55963 main.go:144] libmachine: starting domain...
	I1219 03:54:27.825958   55963 main.go:144] libmachine: ensuring networks are active...
	I1219 03:54:27.826782   55963 main.go:144] libmachine: Ensuring network default is active
	I1219 03:54:27.827317   55963 main.go:144] libmachine: Ensuring network mk-newest-cni-979595 is active
	I1219 03:54:27.828166   55963 main.go:144] libmachine: getting domain XML...
	I1219 03:54:27.829724   55963 main.go:144] libmachine: starting domain XML:
	<domain type='kvm'>
	  <name>newest-cni-979595</name>
	  <uuid>8e86e3db-9966-4ac4-b938-c6eab04a469d</uuid>
	  <memory unit='KiB'>3145728</memory>
	  <currentMemory unit='KiB'>3145728</currentMemory>
	  <vcpu placement='static'>2</vcpu>
	  <os>
	    <type arch='x86_64' machine='pc-i440fx-jammy'>hvm</type>
	    <boot dev='cdrom'/>
	    <boot dev='hd'/>
	    <bootmenu enable='no'/>
	  </os>
	  <features>
	    <acpi/>
	    <apic/>
	    <pae/>
	  </features>
	  <cpu mode='host-passthrough' check='none' migratable='on'/>
	  <clock offset='utc'/>
	  <on_poweroff>destroy</on_poweroff>
	  <on_reboot>restart</on_reboot>
	  <on_crash>destroy</on_crash>
	  <devices>
	    <emulator>/usr/bin/qemu-system-x86_64</emulator>
	    <disk type='file' device='cdrom'>
	      <driver name='qemu' type='raw'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/boot2docker.iso'/>
	      <target dev='hdc' bus='scsi'/>
	      <readonly/>
	      <address type='drive' controller='0' bus='0' target='0' unit='2'/>
	    </disk>
	    <disk type='file' device='disk'>
	      <driver name='qemu' type='raw' io='threads'/>
	      <source file='/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/newest-cni-979595.rawdisk'/>
	      <target dev='hda' bus='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
	    </disk>
	    <controller type='usb' index='0' model='piix3-uhci'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
	    </controller>
	    <controller type='pci' index='0' model='pci-root'/>
	    <controller type='scsi' index='0' model='lsilogic'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
	    </controller>
	    <interface type='network'>
	      <mac address='52:54:00:e5:ac:6b'/>
	      <source network='mk-newest-cni-979595'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
	    </interface>
	    <interface type='network'>
	      <mac address='52:54:00:aa:ba:f9'/>
	      <source network='default'/>
	      <model type='virtio'/>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
	    </interface>
	    <serial type='pty'>
	      <target type='isa-serial' port='0'>
	        <model name='isa-serial'/>
	      </target>
	    </serial>
	    <console type='pty'>
	      <target type='serial' port='0'/>
	    </console>
	    <input type='mouse' bus='ps2'/>
	    <input type='keyboard' bus='ps2'/>
	    <audio id='1' type='none'/>
	    <memballoon model='virtio'>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
	    </memballoon>
	    <rng model='virtio'>
	      <backend model='random'>/dev/random</backend>
	      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
	    </rng>
	  </devices>
	</domain>
	
	I1219 03:54:29.311758   55963 main.go:144] libmachine: waiting for domain to start...
	I1219 03:54:29.313669   55963 main.go:144] libmachine: domain is now running
	I1219 03:54:29.313692   55963 main.go:144] libmachine: waiting for IP...
	I1219 03:54:29.314479   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:29.315133   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:29.315163   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:29.315616   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:29.315661   55963 retry.go:31] will retry after 195.944022ms: waiting for domain to come up
	I1219 03:54:29.513308   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:29.514098   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:29.514115   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:29.514495   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:29.514524   55963 retry.go:31] will retry after 336.104596ms: waiting for domain to come up
	I1219 03:54:29.852162   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:29.852921   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:29.852943   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:29.853568   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:29.853604   55963 retry.go:31] will retry after 471.020747ms: waiting for domain to come up
	I1219 03:54:30.326166   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:30.326866   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:30.326882   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:30.327232   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:30.327266   55963 retry.go:31] will retry after 592.062409ms: waiting for domain to come up
	I1219 03:54:30.921138   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:30.921852   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:30.921871   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:30.922302   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:30.922347   55963 retry.go:31] will retry after 705.614256ms: waiting for domain to come up
	I1219 03:54:31.629278   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:31.630231   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:31.630247   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:31.630530   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:31.630559   55963 retry.go:31] will retry after 891.599258ms: waiting for domain to come up
	I1219 03:54:32.524303   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:32.525099   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:32.525123   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:32.525592   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:32.525630   55963 retry.go:31] will retry after 1.059476047s: waiting for domain to come up
	I1219 03:54:33.586696   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:33.587479   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:33.587495   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:33.587854   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:33.587889   55963 retry.go:31] will retry after 1.052217642s: waiting for domain to come up
	I1219 03:54:34.642148   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:34.642841   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:34.642859   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:34.643358   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:34.643396   55963 retry.go:31] will retry after 1.614190229s: waiting for domain to come up
	I1219 03:54:36.260579   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:36.261531   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:36.261553   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:36.261978   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:36.262041   55963 retry.go:31] will retry after 1.822049353s: waiting for domain to come up
	I1219 03:54:38.085927   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:38.086803   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:38.086849   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:38.087298   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:38.087347   55963 retry.go:31] will retry after 2.017219155s: waiting for domain to come up
	I1219 03:54:40.107418   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:40.108157   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:40.108174   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:40.108505   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:40.108542   55963 retry.go:31] will retry after 3.223669681s: waiting for domain to come up
	I1219 03:54:43.333850   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:43.334542   55963 main.go:144] libmachine: no network interface addresses found for domain newest-cni-979595 (source=lease)
	I1219 03:54:43.334562   55963 main.go:144] libmachine: trying to list again with source=arp
	I1219 03:54:43.334851   55963 main.go:144] libmachine: unable to find current IP address of domain newest-cni-979595 in network mk-newest-cni-979595 (interfaces detected: [])
	I1219 03:54:43.334884   55963 retry.go:31] will retry after 4.229098773s: waiting for domain to come up
	I1219 03:54:47.565185   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.565919   55963 main.go:144] libmachine: domain newest-cni-979595 has current primary IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.565937   55963 main.go:144] libmachine: found domain IP: 192.168.61.160
	I1219 03:54:47.565947   55963 main.go:144] libmachine: reserving static IP address...
	I1219 03:54:47.566484   55963 main.go:144] libmachine: unable to find host DHCP lease matching {name: "newest-cni-979595", mac: "52:54:00:e5:ac:6b", ip: "192.168.61.160"} in network mk-newest-cni-979595
	I1219 03:54:47.797312   55963 main.go:144] libmachine: reserved static IP address 192.168.61.160 for domain newest-cni-979595
	I1219 03:54:47.797333   55963 main.go:144] libmachine: waiting for SSH...
	I1219 03:54:47.797340   55963 main.go:144] libmachine: Getting to WaitForSSH function...
	I1219 03:54:47.800771   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.801232   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:minikube Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:47.801262   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.801457   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:47.801747   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:47.801763   55963 main.go:144] libmachine: About to run SSH command:
	exit 0
	I1219 03:54:47.919892   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:47.920293   55963 main.go:144] libmachine: domain creation complete
	I1219 03:54:47.922141   55963 machine.go:94] provisionDockerMachine start ...
	I1219 03:54:47.924796   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.925277   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:47.925320   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:47.925478   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:47.925680   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:47.925691   55963 main.go:144] libmachine: About to run SSH command:
	hostname
	I1219 03:54:48.036148   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: minikube
	
	I1219 03:54:48.036176   55963 buildroot.go:166] provisioning hostname "newest-cni-979595"
	I1219 03:54:48.038927   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.039370   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.039395   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.039564   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:48.039817   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:48.039830   55963 main.go:144] libmachine: About to run SSH command:
	sudo hostname newest-cni-979595 && echo "newest-cni-979595" | sudo tee /etc/hostname
	I1219 03:54:48.164124   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: newest-cni-979595
	
	I1219 03:54:48.167425   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.168000   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.168042   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.168230   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:48.168486   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:48.168511   55963 main.go:144] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-979595' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-979595/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-979595' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1219 03:54:48.284626   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: 
	I1219 03:54:48.284663   55963 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/22230-5003/.minikube CaCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/22230-5003/.minikube}
	I1219 03:54:48.284702   55963 buildroot.go:174] setting up certificates
	I1219 03:54:48.284719   55963 provision.go:84] configureAuth start
	I1219 03:54:48.288127   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.288496   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.288517   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.291563   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.292579   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.292607   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.292754   55963 provision.go:143] copyHostCerts
	I1219 03:54:48.292800   55963 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem, removing ...
	I1219 03:54:48.292817   55963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem
	I1219 03:54:48.292883   55963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/cert.pem (1123 bytes)
	I1219 03:54:48.293066   55963 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem, removing ...
	I1219 03:54:48.293078   55963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem
	I1219 03:54:48.293116   55963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/key.pem (1675 bytes)
	I1219 03:54:48.293192   55963 exec_runner.go:144] found /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem, removing ...
	I1219 03:54:48.293200   55963 exec_runner.go:203] rm: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem
	I1219 03:54:48.293223   55963 exec_runner.go:151] cp: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/22230-5003/.minikube/ca.pem (1082 bytes)
	I1219 03:54:48.293284   55963 provision.go:117] generating server cert: /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem org=jenkins.newest-cni-979595 san=[127.0.0.1 192.168.61.160 localhost minikube newest-cni-979595]
	I1219 03:54:48.375174   55963 provision.go:177] copyRemoteCerts
	I1219 03:54:48.375228   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1219 03:54:48.377509   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.377839   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.377860   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.378000   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:54:48.461874   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1219 03:54:48.493405   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1219 03:54:48.522314   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1219 03:54:48.551219   55963 provision.go:87] duration metric: took 266.486029ms to configureAuth
	I1219 03:54:48.551244   55963 buildroot.go:189] setting minikube options for container-runtime
	I1219 03:54:48.551410   55963 config.go:182] Loaded profile config "newest-cni-979595": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:54:48.551423   55963 machine.go:97] duration metric: took 629.260739ms to provisionDockerMachine
	I1219 03:54:48.551440   55963 client.go:176] duration metric: took 21.299942263s to LocalClient.Create
	I1219 03:54:48.551465   55963 start.go:167] duration metric: took 21.300013394s to libmachine.API.Create "newest-cni-979595"
	I1219 03:54:48.551477   55963 start.go:293] postStartSetup for "newest-cni-979595" (driver="kvm2")
	I1219 03:54:48.551495   55963 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1219 03:54:48.551551   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1219 03:54:48.554390   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.554822   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.554852   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.554987   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:54:48.638536   55963 ssh_runner.go:195] Run: cat /etc/os-release
	I1219 03:54:48.643902   55963 info.go:137] Remote host: Buildroot 2025.02
	I1219 03:54:48.643935   55963 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/addons for local assets ...
	I1219 03:54:48.644027   55963 filesync.go:126] Scanning /home/jenkins/minikube-integration/22230-5003/.minikube/files for local assets ...
	I1219 03:54:48.644120   55963 filesync.go:149] local asset: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem -> 89782.pem in /etc/ssl/certs
	I1219 03:54:48.644229   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1219 03:54:48.656123   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:54:48.685976   55963 start.go:296] duration metric: took 134.484496ms for postStartSetup
	I1219 03:54:48.689208   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.689622   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.689654   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.689877   55963 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/config.json ...
	I1219 03:54:48.690063   55963 start.go:128] duration metric: took 21.440642543s to createHost
	I1219 03:54:48.692138   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.692449   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.692470   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.692600   55963 main.go:144] libmachine: Using SSH client type: native
	I1219 03:54:48.692792   55963 main.go:144] libmachine: &{{{<nil> 0 [] [] []} docker [0x84dd20] 0x8509c0 <nil>  [] 0s} 192.168.61.160 22 <nil> <nil>}
	I1219 03:54:48.692802   55963 main.go:144] libmachine: About to run SSH command:
	date +%s.%N
	I1219 03:54:48.795176   55963 main.go:144] libmachine: SSH cmd err, output: <nil>: 1766116488.758832895
	
	I1219 03:54:48.795200   55963 fix.go:216] guest clock: 1766116488.758832895
	I1219 03:54:48.795210   55963 fix.go:229] Guest: 2025-12-19 03:54:48.758832895 +0000 UTC Remote: 2025-12-19 03:54:48.690082937 +0000 UTC m=+21.547019868 (delta=68.749958ms)
	I1219 03:54:48.795228   55963 fix.go:200] guest clock delta is within tolerance: 68.749958ms
	I1219 03:54:48.795233   55963 start.go:83] releasing machines lock for "newest-cni-979595", held for 21.545923375s
	I1219 03:54:48.798396   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.798826   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.798854   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.799347   55963 ssh_runner.go:195] Run: cat /version.json
	I1219 03:54:48.799418   55963 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1219 03:54:48.802630   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.802754   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.803071   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.803102   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.803139   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:48.803169   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:48.803248   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:54:48.803496   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:54:48.880163   55963 ssh_runner.go:195] Run: systemctl --version
	I1219 03:54:48.907995   55963 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1219 03:54:48.915513   55963 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1219 03:54:48.915568   55963 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1219 03:54:48.937521   55963 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1219 03:54:48.937541   55963 start.go:496] detecting cgroup driver to use...
	I1219 03:54:48.937598   55963 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I1219 03:54:48.975395   55963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1219 03:54:48.991809   55963 docker.go:218] disabling cri-docker service (if available) ...
	I1219 03:54:48.991870   55963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1219 03:54:49.008941   55963 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1219 03:54:49.024827   55963 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1219 03:54:49.178964   55963 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1219 03:54:49.388888   55963 docker.go:234] disabling docker service ...
	I1219 03:54:49.388978   55963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1219 03:54:49.407543   55963 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1219 03:54:49.422806   55963 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1219 03:54:49.586233   55963 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1219 03:54:49.732662   55963 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1219 03:54:49.752396   55963 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1219 03:54:49.774745   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1219 03:54:49.786990   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1219 03:54:49.799774   55963 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1219 03:54:49.799853   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1219 03:54:49.811857   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:54:49.826340   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1219 03:54:49.842128   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1219 03:54:49.853645   55963 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1219 03:54:49.866523   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1219 03:54:49.877882   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1219 03:54:49.889916   55963 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1219 03:54:49.901808   55963 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1219 03:54:49.911960   55963 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1219 03:54:49.911999   55963 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1219 03:54:49.932537   55963 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1219 03:54:49.943443   55963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:50.082414   55963 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:54:50.122071   55963 start.go:543] Will wait 60s for socket path /run/containerd/containerd.sock
	I1219 03:54:50.122138   55963 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:54:50.128791   55963 retry.go:31] will retry after 1.123108402s: stat /run/containerd/containerd.sock: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/run/containerd/containerd.sock': No such file or directory
	I1219 03:54:51.252775   55963 ssh_runner.go:195] Run: stat /run/containerd/containerd.sock
	I1219 03:54:51.259802   55963 start.go:564] Will wait 60s for crictl version
	I1219 03:54:51.259877   55963 ssh_runner.go:195] Run: which crictl
	I1219 03:54:51.264736   55963 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1219 03:54:51.298280   55963 start.go:580] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  v2.2.0
	RuntimeApiVersion:  v1
	I1219 03:54:51.298358   55963 ssh_runner.go:195] Run: containerd --version
	I1219 03:54:51.322336   55963 ssh_runner.go:195] Run: containerd --version
	I1219 03:54:51.345443   55963 out.go:179] * Preparing Kubernetes v1.35.0-rc.1 on containerd 2.2.0 ...
	I1219 03:54:51.349510   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:51.350031   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:54:51.350072   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:54:51.350286   55963 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I1219 03:54:51.354465   55963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:51.370765   55963 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1219 03:54:51.371964   55963 kubeadm.go:884] updating cluster {Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.160 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1219 03:54:51.372098   55963 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 03:54:51.372148   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:51.402179   55963 containerd.go:623] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.35.0-rc.1". assuming images are not preloaded.
	I1219 03:54:51.402289   55963 ssh_runner.go:195] Run: which lz4
	I1219 03:54:51.406369   55963 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1219 03:54:51.411220   55963 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1219 03:54:51.411241   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (340150867 bytes)
	I1219 03:54:52.713413   55963 containerd.go:563] duration metric: took 1.307076252s to copy over tarball
	I1219 03:54:52.713484   55963 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1219 03:54:54.157255   55963 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (1.443741815s)
	I1219 03:54:54.157281   55963 containerd.go:570] duration metric: took 1.443843414s to extract the tarball
	I1219 03:54:54.157288   55963 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I1219 03:54:54.194835   55963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:54.337054   55963 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1219 03:54:54.387145   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:54.412639   55963 retry.go:31] will retry after 353.788959ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:54:54Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1219 03:54:54.767494   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:54.799284   55963 retry.go:31] will retry after 322.230976ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:54:54Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1219 03:54:55.121812   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:55.149569   55963 retry.go:31] will retry after 331.901788ms: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:54:55Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1219 03:54:55.482241   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:55.510785   55963 retry.go:31] will retry after 1.176527515s: sudo crictl images --output json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-12-19T03:54:55Z" level=fatal msg="validate service connection: validate CRI v1 image API for endpoint \"unix:///run/containerd/containerd.sock\": rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /run/containerd/containerd.sock: connect: no such file or directory\""
	I1219 03:54:56.688138   55963 ssh_runner.go:195] Run: sudo crictl images --output json
	I1219 03:54:56.722080   55963 containerd.go:627] all images are preloaded for containerd runtime.
	I1219 03:54:56.722111   55963 cache_images.go:86] Images are preloaded, skipping loading
	I1219 03:54:56.722121   55963 kubeadm.go:935] updating node { 192.168.61.160 8443 v1.35.0-rc.1 containerd true true} ...
	I1219 03:54:56.722247   55963 kubeadm.go:947] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-rc.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-979595 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.160
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1219 03:54:56.722328   55963 ssh_runner.go:195] Run: sudo crictl --timeout=10s info
	I1219 03:54:56.753965   55963 cni.go:84] Creating CNI manager for ""
	I1219 03:54:56.753986   55963 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:54:56.754002   55963 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1219 03:54:56.754037   55963 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.160 APIServerPort:8443 KubernetesVersion:v1.35.0-rc.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-979595 NodeName:newest-cni-979595 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.160"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.160 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///run/containerd/containerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1219 03:54:56.754146   55963 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.160
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///run/containerd/containerd.sock
	  name: "newest-cni-979595"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.61.160"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.160"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-rc.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1219 03:54:56.754224   55963 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-rc.1
	I1219 03:54:56.766080   55963 binaries.go:51] Found k8s binaries, skipping transfer
	I1219 03:54:56.766148   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1219 03:54:56.777281   55963 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (328 bytes)
	I1219 03:54:56.797092   55963 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I1219 03:54:56.817304   55963 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2239 bytes)
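	The multi-document YAML printed above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration separated by ---) is what gets copied to /var/tmp/minikube/kubeadm.yaml.new (2239 bytes) on the line just above. As a rough, simplified sketch only — this is not minikube's actual generator code; field names and values are copied from the log purely for illustration — a config of this shape can be rendered with Go's text/template:

package main

import (
	"os"
	"text/template"
)

// Illustrative fragment only; the real config carries the full set of fields
// shown in the log above (extraArgs, certSANs, kubelet and kube-proxy sections).
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Values taken from the run above, used here only as example data.
	data := map[string]interface{}{
		"AdvertiseAddress":  "192.168.61.160",
		"APIServerPort":     8443,
		"KubernetesVersion": "v1.35.0-rc.1",
		"PodSubnet":         "10.42.0.0/16",
		"ServiceCIDR":       "10.96.0.0/12",
	}
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}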
	I1219 03:54:56.836407   55963 ssh_runner.go:195] Run: grep 192.168.61.160	control-plane.minikube.internal$ /etc/hosts
	I1219 03:54:56.840292   55963 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.160	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1219 03:54:56.853974   55963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:54:56.992232   55963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:54:57.011998   55963 certs.go:69] Setting up /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595 for IP: 192.168.61.160
	I1219 03:54:57.012043   55963 certs.go:195] generating shared ca certs ...
	I1219 03:54:57.012064   55963 certs.go:227] acquiring lock for ca certs: {Name:mk6db7e23547b9013e447eaa0ddba18e05213211 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:57.012227   55963 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key
	I1219 03:54:57.012306   55963 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key
	I1219 03:54:57.012329   55963 certs.go:257] generating profile certs ...
	I1219 03:54:57.012411   55963 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/client.key
	I1219 03:54:57.012428   55963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/client.crt with IP's: []
	I1219 03:54:57.197688   55963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/client.crt ...
	I1219 03:54:57.197720   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/client.crt: {Name:mk3b8bf7bf09d915f40d5134b28b49156ad5fa97 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:57.197896   55963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/client.key ...
	I1219 03:54:57.197912   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/client.key: {Name:mk114209833ecaab5a13a4c642b251e434c0055e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:57.198063   55963 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.key.c76ec76b
	I1219 03:54:57.198092   55963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.crt.c76ec76b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.61.160]
	I1219 03:54:57.264665   55963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.crt.c76ec76b ...
	I1219 03:54:57.264697   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.crt.c76ec76b: {Name:mk6cd9b00a43b318995c34547434ddebff14db8a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:57.264859   55963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.key.c76ec76b ...
	I1219 03:54:57.264871   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.key.c76ec76b: {Name:mkb062aef8c4629e4099e389ef200083e799a7ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:57.264950   55963 certs.go:382] copying /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.crt.c76ec76b -> /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.crt
	I1219 03:54:57.265032   55963 certs.go:386] copying /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.key.c76ec76b -> /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.key
	I1219 03:54:57.265087   55963 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/proxy-client.key
	I1219 03:54:57.265102   55963 crypto.go:68] Generating cert /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/proxy-client.crt with IP's: []
	I1219 03:54:57.508411   55963 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/proxy-client.crt ...
	I1219 03:54:57.508445   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/proxy-client.crt: {Name:mk96794235a94545595c39881e3c38ec9896e963 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:57.508659   55963 crypto.go:164] Writing key to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/proxy-client.key ...
	I1219 03:54:57.508679   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/proxy-client.key: {Name:mk57f0865452726e9a9aeb875704754aa2db5065 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:54:57.508933   55963 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem (1338 bytes)
	W1219 03:54:57.508980   55963 certs.go:480] ignoring /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978_empty.pem, impossibly tiny 0 bytes
	I1219 03:54:57.508993   55963 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca-key.pem (1675 bytes)
	I1219 03:54:57.509041   55963 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/ca.pem (1082 bytes)
	I1219 03:54:57.509071   55963 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/cert.pem (1123 bytes)
	I1219 03:54:57.509094   55963 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/certs/key.pem (1675 bytes)
	I1219 03:54:57.509142   55963 certs.go:484] found cert: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem (1708 bytes)
	I1219 03:54:57.509677   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1219 03:54:57.547036   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1219 03:54:57.585218   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1219 03:54:57.616478   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1219 03:54:57.647379   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1219 03:54:57.676985   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1219 03:54:57.707092   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1219 03:54:57.740811   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/newest-cni-979595/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1219 03:54:57.772912   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1219 03:54:57.802088   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/certs/8978.pem --> /usr/share/ca-certificates/8978.pem (1338 bytes)
	I1219 03:54:57.829711   55963 ssh_runner.go:362] scp /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/ssl/certs/89782.pem --> /usr/share/ca-certificates/89782.pem (1708 bytes)
	I1219 03:54:57.858263   55963 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (722 bytes)
	I1219 03:54:57.878430   55963 ssh_runner.go:195] Run: openssl version
	I1219 03:54:57.884891   55963 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:57.896839   55963 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1219 03:54:57.908216   55963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:57.913742   55963 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 19 02:26 /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:57.913806   55963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1219 03:54:57.922911   55963 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1219 03:54:57.936341   55963 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1219 03:54:57.948191   55963 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/8978.pem
	I1219 03:54:57.960345   55963 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/8978.pem /etc/ssl/certs/8978.pem
	I1219 03:54:57.972706   55963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/8978.pem
	I1219 03:54:57.978239   55963 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 19 02:37 /usr/share/ca-certificates/8978.pem
	I1219 03:54:57.978309   55963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/8978.pem
	I1219 03:54:57.985785   55963 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1219 03:54:57.997676   55963 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/8978.pem /etc/ssl/certs/51391683.0
	I1219 03:54:58.009312   55963 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/89782.pem
	I1219 03:54:58.020962   55963 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/89782.pem /etc/ssl/certs/89782.pem
	I1219 03:54:58.032696   55963 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/89782.pem
	I1219 03:54:58.037856   55963 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 19 02:37 /usr/share/ca-certificates/89782.pem
	I1219 03:54:58.037905   55963 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/89782.pem
	I1219 03:54:58.044950   55963 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1219 03:54:58.057137   55963 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/89782.pem /etc/ssl/certs/3ec20f2e.0
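	The commands from 03:54:57.884891 onward repeat one pattern per CA bundle: place the PEM under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink it as /etc/ssl/certs/<hash>.0 (b5213941.0, 51391683.0 and 3ec20f2e.0 above) so TLS clients on the node can look the certificate up by hash. A minimal local sketch of that pattern in Go, assuming openssl is on PATH — the test itself runs the equivalent shell commands remotely through ssh_runner:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkCA computes the OpenSSL subject hash for a CA certificate and creates
// the /etc/ssl/certs/<hash>.0 symlink, mirroring the "openssl x509 -hash" and
// "ln -fs" commands in the log. Paths and error handling are simplified.
func linkCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// Equivalent of ln -fs: drop any stale link, then create a fresh one.
	_ = os.Remove(link)
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}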
	I1219 03:54:58.069673   55963 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1219 03:54:58.074695   55963 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1219 03:54:58.074775   55963 kubeadm.go:401] StartCluster: {Name:newest-cni-979595 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:newest-cni-979595 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.160 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 03:54:58.074868   55963 cri.go:57] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I1219 03:54:58.074957   55963 ssh_runner.go:195] Run: sudo -s eval "crictl --timeout=10s ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1219 03:54:58.116741   55963 cri.go:92] found id: ""
	I1219 03:54:58.116833   55963 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1219 03:54:58.131872   55963 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1219 03:54:58.146657   55963 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1219 03:54:58.160843   55963 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1219 03:54:58.160867   55963 kubeadm.go:158] found existing configuration files:
	
	I1219 03:54:58.160925   55963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1219 03:54:58.173319   55963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1219 03:54:58.173379   55963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1219 03:54:58.185186   55963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1219 03:54:58.197472   55963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1219 03:54:58.197526   55963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1219 03:54:58.209827   55963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1219 03:54:58.222147   55963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1219 03:54:58.222210   55963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1219 03:54:58.234405   55963 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1219 03:54:58.245937   55963 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1219 03:54:58.246058   55963 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1219 03:54:58.259815   55963 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-rc.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I1219 03:54:58.509948   55963 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1219 03:55:07.557887   55963 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-rc.1
	I1219 03:55:07.557969   55963 kubeadm.go:319] [preflight] Running pre-flight checks
	I1219 03:55:07.558111   55963 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1219 03:55:07.558258   55963 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1219 03:55:07.558424   55963 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1219 03:55:07.558498   55963 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1219 03:55:07.559938   55963 out.go:252]   - Generating certificates and keys ...
	I1219 03:55:07.560033   55963 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1219 03:55:07.560137   55963 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1219 03:55:07.560197   55963 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1219 03:55:07.560274   55963 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1219 03:55:07.560349   55963 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1219 03:55:07.560395   55963 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1219 03:55:07.560442   55963 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1219 03:55:07.560548   55963 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-979595] and IPs [192.168.61.160 127.0.0.1 ::1]
	I1219 03:55:07.560607   55963 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1219 03:55:07.560757   55963 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-979595] and IPs [192.168.61.160 127.0.0.1 ::1]
	I1219 03:55:07.560855   55963 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1219 03:55:07.560944   55963 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1219 03:55:07.561020   55963 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1219 03:55:07.561099   55963 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1219 03:55:07.561154   55963 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1219 03:55:07.561205   55963 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1219 03:55:07.561253   55963 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1219 03:55:07.561315   55963 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1219 03:55:07.561361   55963 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1219 03:55:07.561432   55963 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1219 03:55:07.561488   55963 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1219 03:55:07.562608   55963 out.go:252]   - Booting up control plane ...
	I1219 03:55:07.562686   55963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1219 03:55:07.562774   55963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1219 03:55:07.562858   55963 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1219 03:55:07.562970   55963 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1219 03:55:07.563149   55963 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1219 03:55:07.563305   55963 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1219 03:55:07.563428   55963 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1219 03:55:07.563476   55963 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1219 03:55:07.563585   55963 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1219 03:55:07.563674   55963 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1219 03:55:07.563723   55963 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 501.640511ms
	I1219 03:55:07.563801   55963 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1219 03:55:07.563869   55963 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.61.160:8443/livez
	I1219 03:55:07.563945   55963 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1219 03:55:07.564064   55963 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1219 03:55:07.564174   55963 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.509901433s
	I1219 03:55:07.564267   55963 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 3.704468718s
	I1219 03:55:07.564384   55963 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 5.501658332s
	I1219 03:55:07.564533   55963 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1219 03:55:07.564687   55963 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1219 03:55:07.564781   55963 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1219 03:55:07.565049   55963 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-979595 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1219 03:55:07.565140   55963 kubeadm.go:319] [bootstrap-token] Using token: lxoixs.rxtcutk8hh24sg4p
	I1219 03:55:07.566509   55963 out.go:252]   - Configuring RBAC rules ...
	I1219 03:55:07.566634   55963 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1219 03:55:07.566739   55963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1219 03:55:07.566918   55963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1219 03:55:07.567098   55963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1219 03:55:07.567211   55963 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1219 03:55:07.567334   55963 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1219 03:55:07.567460   55963 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1219 03:55:07.567498   55963 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1219 03:55:07.567552   55963 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1219 03:55:07.567558   55963 kubeadm.go:319] 
	I1219 03:55:07.567611   55963 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1219 03:55:07.567616   55963 kubeadm.go:319] 
	I1219 03:55:07.567715   55963 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1219 03:55:07.567724   55963 kubeadm.go:319] 
	I1219 03:55:07.567745   55963 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1219 03:55:07.567816   55963 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1219 03:55:07.567869   55963 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1219 03:55:07.567874   55963 kubeadm.go:319] 
	I1219 03:55:07.567919   55963 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1219 03:55:07.567929   55963 kubeadm.go:319] 
	I1219 03:55:07.567998   55963 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1219 03:55:07.568018   55963 kubeadm.go:319] 
	I1219 03:55:07.568085   55963 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1219 03:55:07.568155   55963 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1219 03:55:07.568210   55963 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1219 03:55:07.568217   55963 kubeadm.go:319] 
	I1219 03:55:07.568338   55963 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1219 03:55:07.568439   55963 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1219 03:55:07.568449   55963 kubeadm.go:319] 
	I1219 03:55:07.568555   55963 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token lxoixs.rxtcutk8hh24sg4p \
	I1219 03:55:07.568723   55963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1b58dfad10804396e47de30c3a743eeeceafad4940a4231204a7d6147cee8c31 \
	I1219 03:55:07.568770   55963 kubeadm.go:319] 	--control-plane 
	I1219 03:55:07.568780   55963 kubeadm.go:319] 
	I1219 03:55:07.568871   55963 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1219 03:55:07.568879   55963 kubeadm.go:319] 
	I1219 03:55:07.569026   55963 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token lxoixs.rxtcutk8hh24sg4p \
	I1219 03:55:07.569189   55963 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:1b58dfad10804396e47de30c3a743eeeceafad4940a4231204a7d6147cee8c31 
	I1219 03:55:07.569204   55963 cni.go:84] Creating CNI manager for ""
	I1219 03:55:07.569212   55963 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 03:55:07.571355   55963 out.go:179] * Configuring bridge CNI (Container Networking Interface) ...
	I1219 03:55:07.572422   55963 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I1219 03:55:07.587228   55963 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I1219 03:55:07.614253   55963 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1219 03:55:07.614321   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:07.614341   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-979595 minikube.k8s.io/updated_at=2025_12_19T03_55_07_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6 minikube.k8s.io/name=newest-cni-979595 minikube.k8s.io/primary=true
	I1219 03:55:07.740271   55963 ops.go:34] apiserver oom_adj: -16
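	The minikube-rbac binding created at 03:55:07.614321 grants cluster-admin to the kube-system default service account by shelling out to the bundled kubectl. Purely as an illustration of what that command produces — not the code path used above — the same object could be created programmatically with client-go (requires the k8s.io/client-go and k8s.io/api modules; the kubeconfig path is the one from the log):

package main

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Any admin kubeconfig works; this path matches the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Equivalent of: kubectl create clusterrolebinding minikube-rbac
	//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
	crb := &rbacv1.ClusterRoleBinding{
		ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
		RoleRef: rbacv1.RoleRef{
			APIGroup: "rbac.authorization.k8s.io",
			Kind:     "ClusterRole",
			Name:     "cluster-admin",
		},
		Subjects: []rbacv1.Subject{{
			Kind:      "ServiceAccount",
			Name:      "default",
			Namespace: "kube-system",
		}},
	}
	_, err = client.RbacV1().ClusterRoleBindings().Create(context.Background(), crb, metav1.CreateOptions{})
	fmt.Println("create clusterrolebinding:", err)
}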
	I1219 03:55:07.740308   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:08.241150   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:08.740927   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:09.241115   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:09.741207   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:10.241264   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:10.741178   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:11.240546   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:11.741361   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:12.241364   55963 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1219 03:55:12.353818   55963 kubeadm.go:1114] duration metric: took 4.739552362s to wait for elevateKubeSystemPrivileges
	I1219 03:55:12.353866   55963 kubeadm.go:403] duration metric: took 14.279097721s to StartCluster
	I1219 03:55:12.353889   55963 settings.go:142] acquiring lock: {Name:mk7f7ba85357bfc9fca2e66b70b16d967ca355d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:55:12.353966   55963 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:55:12.354862   55963 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/22230-5003/kubeconfig: {Name:mkddc4d888673d0300234e1930e26db252efd15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1219 03:55:12.355198   55963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1219 03:55:12.355226   55963 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.61.160 Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}
	I1219 03:55:12.355299   55963 addons.go:543] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1219 03:55:12.355377   55963 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-979595"
	I1219 03:55:12.355404   55963 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-979595"
	I1219 03:55:12.355414   55963 addons.go:70] Setting default-storageclass=true in profile "newest-cni-979595"
	I1219 03:55:12.355435   55963 config.go:182] Loaded profile config "newest-cni-979595": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 03:55:12.355453   55963 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-979595"
	I1219 03:55:12.355437   55963 host.go:66] Checking if "newest-cni-979595" exists ...
	I1219 03:55:12.356868   55963 out.go:179] * Verifying Kubernetes components...
	I1219 03:55:12.358109   55963 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1219 03:55:12.358135   55963 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1219 03:55:12.358960   55963 addons.go:239] Setting addon default-storageclass=true in "newest-cni-979595"
	I1219 03:55:12.358993   55963 host.go:66] Checking if "newest-cni-979595" exists ...
	I1219 03:55:12.359792   55963 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:55:12.359812   55963 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1219 03:55:12.360928   55963 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1219 03:55:12.360949   55963 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1219 03:55:12.362842   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:55:12.363243   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:55:12.363269   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:55:12.363413   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:55:12.363935   55963 main.go:144] libmachine: domain newest-cni-979595 has defined MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:55:12.364329   55963 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:e5:ac:6b", ip: ""} in network mk-newest-cni-979595: {Iface:virbr3 ExpiryTime:2025-12-19 04:54:43 +0000 UTC Type:0 Mac:52:54:00:e5:ac:6b Iaid: IPaddr:192.168.61.160 Prefix:24 Hostname:newest-cni-979595 Clientid:01:52:54:00:e5:ac:6b}
	I1219 03:55:12.364359   55963 main.go:144] libmachine: domain newest-cni-979595 has defined IP address 192.168.61.160 and MAC address 52:54:00:e5:ac:6b in network mk-newest-cni-979595
	I1219 03:55:12.364539   55963 sshutil.go:53] new ssh client: &{IP:192.168.61.160 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/newest-cni-979595/id_rsa Username:docker}
	I1219 03:55:12.580946   55963 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.61.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1219 03:55:12.669084   55963 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1219 03:55:12.906114   55963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1219 03:55:13.051643   55963 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1219 03:55:13.533355   55963 start.go:977] {"host.minikube.internal": 192.168.61.1} host record injected into CoreDNS's ConfigMap
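	The pipeline at 03:55:12.580946 rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.61.1), which the "host record injected" line above confirms. A small sketch of that Corefile edit done as a plain string transformation in Go — the Corefile content below is illustrative; the real one is read from and written back to the ConfigMap:

package main

import (
	"fmt"
	"strings"
)

// injectHostRecord inserts a CoreDNS hosts{} stanza just before the forward
// plugin line, which is what the sed pipeline in the log does.
func injectHostRecord(corefile, hostIP string) string {
	block := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
	marker := "        forward . /etc/resolv.conf"
	return strings.Replace(corefile, marker, block+marker, 1)
}

func main() {
	// Minimal example Corefile, not the cluster's actual one.
	corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
	fmt.Print(injectHostRecord(corefile, "192.168.61.1"))
}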
	I1219 03:55:13.534182   55963 api_server.go:52] waiting for apiserver process to appear ...
	I1219 03:55:13.534238   55963 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:55:14.084617   55963 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-979595" context rescaled to 1 replicas
	I1219 03:55:14.365031   55963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.458868909s)
	I1219 03:55:14.365057   55963 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-rc.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.313388712s)
	I1219 03:55:14.365135   55963 api_server.go:72] duration metric: took 2.009875181s to wait for apiserver process to appear ...
	I1219 03:55:14.365157   55963 api_server.go:88] waiting for apiserver healthz status ...
	I1219 03:55:14.365179   55963 api_server.go:253] Checking apiserver healthz at https://192.168.61.160:8443/healthz ...
	I1219 03:55:14.385782   55963 api_server.go:279] https://192.168.61.160:8443/healthz returned 200:
	ok
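	api_server.go first waits for the kube-apiserver process and then polls https://192.168.61.160:8443/healthz until it answers 200 "ok", as seen above. A self-contained sketch of that readiness loop follows; TLS verification is skipped here only for brevity, whereas the real check trusts the cluster CA instead of disabling verification:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns "ok"
// or the overall timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK && string(body) == "ok" {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.160:8443/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}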
	I1219 03:55:14.385852   55963 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1219 03:55:14.387067   55963 addons.go:546] duration metric: took 2.031765655s for enable addons: enabled=[storage-provisioner default-storageclass]
	I1219 03:55:14.390039   55963 api_server.go:141] control plane version: v1.35.0-rc.1
	I1219 03:55:14.390069   55963 api_server.go:131] duration metric: took 24.904687ms to wait for apiserver health ...
	I1219 03:55:14.390081   55963 system_pods.go:43] waiting for kube-system pods to appear ...
	I1219 03:55:14.399506   55963 system_pods.go:59] 8 kube-system pods found
	I1219 03:55:14.399585   55963 system_pods.go:61] "coredns-7d764666f9-6lzt7" [87ce3c78-1f13-4656-a6f0-7c6b53f5f886] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:55:14.399601   55963 system_pods.go:61] "coredns-7d764666f9-pj7c8" [529bcf8c-b13d-49a7-aa69-b208f1056c5d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1219 03:55:14.399640   55963 system_pods.go:61] "etcd-newest-cni-979595" [019ba627-348d-4190-a22a-21fef7f603ea] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1219 03:55:14.399657   55963 system_pods.go:61] "kube-apiserver-newest-cni-979595" [b1c185c8-ef29-413e-b816-c2168e42b125] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1219 03:55:14.399663   55963 system_pods.go:61] "kube-controller-manager-newest-cni-979595" [73932231-0ed5-42d9-b6b4-6e7f90b36588] Running
	I1219 03:55:14.399670   55963 system_pods.go:61] "kube-proxy-mnzpz" [a3c861f6-50e1-4725-9baa-078742243a53] Running
	I1219 03:55:14.399681   55963 system_pods.go:61] "kube-scheduler-newest-cni-979595" [e8d71cdc-4d97-4174-9374-67c0e1be7d0d] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1219 03:55:14.399687   55963 system_pods.go:61] "storage-provisioner" [9e481219-e700-4f32-b76c-1fc48488f1e9] Pending
	I1219 03:55:14.399702   55963 system_pods.go:74] duration metric: took 9.612762ms to wait for pod list to return data ...
	I1219 03:55:14.399716   55963 default_sa.go:34] waiting for default service account to be created ...
	I1219 03:55:14.410708   55963 default_sa.go:45] found service account: "default"
	I1219 03:55:14.410734   55963 default_sa.go:55] duration metric: took 11.010706ms for default service account to be created ...
	I1219 03:55:14.410748   55963 kubeadm.go:587] duration metric: took 2.055492178s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1219 03:55:14.410769   55963 node_conditions.go:102] verifying NodePressure condition ...
	I1219 03:55:14.415814   55963 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I1219 03:55:14.415845   55963 node_conditions.go:123] node cpu capacity is 2
	I1219 03:55:14.415870   55963 node_conditions.go:105] duration metric: took 5.095159ms to run NodePressure ...
	I1219 03:55:14.415884   55963 start.go:242] waiting for startup goroutines ...
	I1219 03:55:14.415894   55963 start.go:247] waiting for cluster config update ...
	I1219 03:55:14.415912   55963 start.go:256] writing updated cluster config ...
	I1219 03:55:14.416192   55963 ssh_runner.go:195] Run: rm -f paused
	I1219 03:55:14.481109   55963 start.go:625] kubectl: 1.35.0, cluster: 1.35.0-rc.1 (minor skew: 0)
	I1219 03:55:14.483092   55963 out.go:179] * Done! kubectl is now configured to use "newest-cni-979595" cluster and "default" namespace by default
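	The check at 03:55:14.481109 compares the client kubectl version (1.35.0) with the cluster version (1.35.0-rc.1) and reports the minor-version skew. A toy sketch of that comparison — minikube itself uses proper version parsing; this hand-rolled helper is only for illustration and ignores pre-release suffixes such as -rc.1:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// minorSkew returns the absolute difference between the minor components of
// two "major.minor.patch" version strings, the figure logged as "minor skew".
func minorSkew(a, b string) (int, error) {
	minor := func(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}
	ma, err := minor(a)
	if err != nil {
		return 0, err
	}
	mb, err := minor(b)
	if err != nil {
		return 0, err
	}
	if ma > mb {
		return ma - mb, nil
	}
	return mb - ma, nil
}

func main() {
	skew, _ := minorSkew("1.35.0", "1.35.0-rc.1")
	fmt.Println("minor skew:", skew) // prints 0, matching the log
}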
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                                   ATTEMPT             POD ID              POD                                                     NAMESPACE
	8858312cf5133       6e38f40d628db       17 minutes ago      Running             storage-provisioner                    2                   2a7b73727c50b       storage-provisioner                                     kube-system
	00cd7ad611cf4       3a975970da2f5       17 minutes ago      Running             proxy                                  0                   aec924010950e       kubernetes-dashboard-kong-9849c64bd-wgdnx               kubernetes-dashboard
	7f4bc72ab8030       3a975970da2f5       17 minutes ago      Exited              clear-stale-pid                        0                   aec924010950e       kubernetes-dashboard-kong-9849c64bd-wgdnx               kubernetes-dashboard
	0d9d949e94e6f       59f642f485d26       18 minutes ago      Running             kubernetes-dashboard-web               0                   20f11d2bbf1a0       kubernetes-dashboard-web-5c9f966b98-wwbc2               kubernetes-dashboard
	503098129741b       a0607af4fcd8a       18 minutes ago      Running             kubernetes-dashboard-api               0                   a238ed45df354       kubernetes-dashboard-api-5444544855-rgb27               kubernetes-dashboard
	4b740253b2e42       d9cbc9f4053ca       18 minutes ago      Running             kubernetes-dashboard-metrics-scraper   0                   c8401c8fc52a8       kubernetes-dashboard-metrics-scraper-7685fd8b77-kfx97   kubernetes-dashboard
	a0b5497708d9c       dd54374d0ab14       18 minutes ago      Running             kubernetes-dashboard-auth              0                   3b5b7035cc28c       kubernetes-dashboard-auth-75d54f6f86-bnd95              kubernetes-dashboard
	e11bfb213730c       56cc512116c8f       18 minutes ago      Running             busybox                                1                   f3a8800713fd9       busybox                                                 default
	89d545dd0db17       52546a367cc9e       18 minutes ago      Running             coredns                                1                   c4b48d81be80a       coredns-66bc5c9577-bzq6s                                kube-system
	ced946dadbf7a       6e38f40d628db       18 minutes ago      Exited              storage-provisioner                    1                   2a7b73727c50b       storage-provisioner                                     kube-system
	2b86f1c041410       36eef8e07bdd6       18 minutes ago      Running             kube-proxy                             1                   6c0050aa200d4       kube-proxy-vhml9                                        kube-system
	26b15c351a7f5       a3e246e9556e9       18 minutes ago      Running             etcd                                   1                   3d3fe1695a330       etcd-default-k8s-diff-port-382606                       kube-system
	417d2eb47c0a9       aec12dadf56dd       18 minutes ago      Running             kube-scheduler                         1                   0ca1b2caa6989       kube-scheduler-default-k8s-diff-port-382606             kube-system
	518a94577bb7d       aa27095f56193       18 minutes ago      Running             kube-apiserver                         1                   39ec5ef103cca       kube-apiserver-default-k8s-diff-port-382606             kube-system
	9caa0d440527b       5826b25d990d7       18 minutes ago      Running             kube-controller-manager                1                   07eba73b830e6       kube-controller-manager-default-k8s-diff-port-382606    kube-system
	45c1210726c66       56cc512116c8f       20 minutes ago      Exited              busybox                                0                   c511096e8686d       busybox                                                 default
	bae993c63f9a1       52546a367cc9e       21 minutes ago      Exited              coredns                                0                   0a051dd2a97a2       coredns-66bc5c9577-bzq6s                                kube-system
	e26689632e68d       36eef8e07bdd6       21 minutes ago      Exited              kube-proxy                             0                   8b82aa902baee       kube-proxy-vhml9                                        kube-system
	4acb45618ed01       aa27095f56193       21 minutes ago      Exited              kube-apiserver                         0                   7d900779b2564       kube-apiserver-default-k8s-diff-port-382606             kube-system
	7a54851195b09       aec12dadf56dd       21 minutes ago      Exited              kube-scheduler                         0                   a3d5034d7c252       kube-scheduler-default-k8s-diff-port-382606             kube-system
	d61732768d3fb       a3e246e9556e9       21 minutes ago      Exited              etcd                                   0                   a070eff3fe2f6       etcd-default-k8s-diff-port-382606                       kube-system
	f5b37d825fd69       5826b25d990d7       21 minutes ago      Exited              kube-controller-manager                0                   259e85dc6f99e       kube-controller-manager-default-k8s-diff-port-382606    kube-system
	
	
	==> containerd <==
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.049605034Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3e588983-8f37-472c-8234-e7dd2e1a6a4a/89d545dd0db1733ee4daff06cee68794fa2612e7839e9455fde9fb6eabbb7ef2/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.052526284Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4ceeb65a-96a3-46f8-b5bb-9eee51c1d4a4/a0b5497708d9ced75d92291f6ac713d36cce94b49103ad89b2dc84b7aa7aa541/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.053427050Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2d4447ed-82a8-491a-a4e1-627981605a48/4b740253b2e42187c5c5011e1460253c9b2cd55a5b3261393ba8f6ef0a57337c/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.054327599Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podce3f0db8d16dacb79fc90e036faf5ce3/26b15c351a7f5021930cc9f51f1954766cef758efb477c6f239d48814b55ad89/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.056073626Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod8bec61eb-4ec4-4f3f-abf1-d471842e5929/2b86f1c0414105cede5a3456f97249cb2d21efe909ab37873e3ce4a615c86eab/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.056982942Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4716d535-618f-4469-b896-418b93cfe8af/0d9d949e94e6f380d0ff910f058da5d15e70ebec0bbbf77042abc4fc76dd78d4/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.058714837Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod858ad0d3-1b87-42c8-9494-039b5e1da647/00cd7ad611cf47d9b49840544352ba45da7e52058115f4962ead6fd3e4db4d73/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.060987480Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod10e715ce-7edc-4af5-93e0-e975d561cdf3/8858312cf5133d1dadaf156295497a020ac7e467c5a2f7f19132111df9e8becd/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.062071208Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod75e075711d0e80a5b7777d004254cc7c/518a94577bb7de5f6f62c647a74b67c644a3a4cea449ef09d71d4ad5df9ad912/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.063011928Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/poded75af231424877e71cf9380aa17a357/417d2eb47c0a973814eca73db740808aaf83346035e5cd9c14ff6314a66d7849/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.063997680Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda15483a8-253a-46ba-89cf-a7281f75888f/e11bfb213730cb86d3eb541d27aa238dea64dfca5a8d94c2bd926d545c9d6e2f/hugetlb.2MB.events\""
	Dec 19 03:55:19 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:19.065584351Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod6b019719-0fa6-4169-a8d7-56eb6752bd14/503098129741b3077535c690ccb45bf024c8d611f90bec4ecbbe47b18c85deb3/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.083652463Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod8bec61eb-4ec4-4f3f-abf1-d471842e5929/2b86f1c0414105cede5a3456f97249cb2d21efe909ab37873e3ce4a615c86eab/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.084865560Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4716d535-618f-4469-b896-418b93cfe8af/0d9d949e94e6f380d0ff910f058da5d15e70ebec0bbbf77042abc4fc76dd78d4/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.086262137Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod858ad0d3-1b87-42c8-9494-039b5e1da647/00cd7ad611cf47d9b49840544352ba45da7e52058115f4962ead6fd3e4db4d73/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.087160164Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/pod10e715ce-7edc-4af5-93e0-e975d561cdf3/8858312cf5133d1dadaf156295497a020ac7e467c5a2f7f19132111df9e8becd/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.087879165Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod75e075711d0e80a5b7777d004254cc7c/518a94577bb7de5f6f62c647a74b67c644a3a4cea449ef09d71d4ad5df9ad912/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.088822377Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/poded75af231424877e71cf9380aa17a357/417d2eb47c0a973814eca73db740808aaf83346035e5cd9c14ff6314a66d7849/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.089719909Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/besteffort/poda15483a8-253a-46ba-89cf-a7281f75888f/e11bfb213730cb86d3eb541d27aa238dea64dfca5a8d94c2bd926d545c9d6e2f/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.091017685Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod6b019719-0fa6-4169-a8d7-56eb6752bd14/503098129741b3077535c690ccb45bf024c8d611f90bec4ecbbe47b18c85deb3/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.092317375Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podfc30c752e5dce8dd9191842cbc279eb5/9caa0d440527b29d084b74cd9fa77197ce53354e034e86874876263937324b73/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.093243777Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod3e588983-8f37-472c-8234-e7dd2e1a6a4a/89d545dd0db1733ee4daff06cee68794fa2612e7839e9455fde9fb6eabbb7ef2/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.093968835Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod4ceeb65a-96a3-46f8-b5bb-9eee51c1d4a4/a0b5497708d9ced75d92291f6ac713d36cce94b49103ad89b2dc84b7aa7aa541/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.094793711Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/pod2d4447ed-82a8-491a-a4e1-627981605a48/4b740253b2e42187c5c5011e1460253c9b2cd55a5b3261393ba8f6ef0a57337c/hugetlb.2MB.events\""
	Dec 19 03:55:29 default-k8s-diff-port-382606 containerd[723]: time="2025-12-19T03:55:29.095667364Z" level=error msg="unable to parse \"max 0\" as a uint from Cgroup file \"/sys/fs/cgroup/kubepods/burstable/podce3f0db8d16dacb79fc90e036faf5ce3/26b15c351a7f5021930cc9f51f1954766cef758efb477c6f239d48814b55ad89/hugetlb.2MB.events\""
	
	
	==> coredns [89d545dd0db1733ee4daff06cee68794fa2612e7839e9455fde9fb6eabbb7ef2] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49729 - 57509 "HINFO IN 9161108537065804054.7799302224143394389. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017461273s
	
	
	==> coredns [bae993c63f9a1e56bf48f73918e0a8f4f7ffcca3fa410b748fc2e1a3a59b5bbe] <==
	maxprocs: Leaving GOMAXPROCS=2: CPU quota undefined
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 1e9477b8ea56ebab8df02f3cc3fb780e34e7eaf8b09bececeeafb7bdf5213258aac3abbfeb320bc10fb8083d88700566a605aa1a4c00dddf9b599a38443364da
	[INFO] Reloading complete
	[INFO] 127.0.0.1:50735 - 17360 "HINFO IN 2174463226158819289.5172247921982077030. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017346048s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-382606
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-382606
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d7bd998f643f77295f2e0ab31c763be310dbe1a6
	                    minikube.k8s.io/name=default-k8s-diff-port-382606
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_12_19T03_34_08_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Dec 2025 03:34:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-382606
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Dec 2025 03:55:32 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Dec 2025 03:54:00 +0000   Fri, 19 Dec 2025 03:34:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Dec 2025 03:54:00 +0000   Fri, 19 Dec 2025 03:34:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Dec 2025 03:54:00 +0000   Fri, 19 Dec 2025 03:34:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Dec 2025 03:54:00 +0000   Fri, 19 Dec 2025 03:37:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.129
	  Hostname:    default-k8s-diff-port-382606
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3035912Ki
	  pods:               110
	System Info:
	  Machine ID:                 342506c19e124922943823d9d57eea28
	  System UUID:                342506c1-9e12-4922-9438-23d9d57eea28
	  Boot ID:                    7f2ea5ee-7aae-4716-9364-8ec21adb7cea
	  Kernel Version:             6.6.95
	  OS Image:                   Buildroot 2025.02
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  containerd://2.2.0
	  Kubelet Version:            v1.34.3
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-66bc5c9577-bzq6s                                 100m (5%)     0 (0%)      70Mi (2%)        170Mi (5%)     21m
	  kube-system                 etcd-default-k8s-diff-port-382606                        100m (5%)     0 (0%)      100Mi (3%)       0 (0%)         21m
	  kube-system                 kube-apiserver-default-k8s-diff-port-382606              250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-382606     200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-vhml9                                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-default-k8s-diff-port-382606              100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 metrics-server-746fcd58dc-xphdl                          100m (5%)     0 (0%)      200Mi (6%)       0 (0%)         20m
	  kube-system                 storage-provisioner                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kubernetes-dashboard        kubernetes-dashboard-api-5444544855-rgb27                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-auth-75d54f6f86-bnd95               100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-kong-9849c64bd-wgdnx                0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kubernetes-dashboard        kubernetes-dashboard-metrics-scraper-7685fd8b77-kfx97    100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	  kubernetes-dashboard        kubernetes-dashboard-web-5c9f966b98-wwbc2                100m (5%)     250m (12%)  200Mi (6%)       400Mi (13%)    18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests      Limits
	  --------           --------      ------
	  cpu                1250m (62%)   1 (50%)
	  memory             1170Mi (39%)  1770Mi (59%)
	  ephemeral-storage  0 (0%)        0 (0%)
	  hugepages-2Mi      0 (0%)        0 (0%)
	Events:
	  Type     Reason                   Age                From             Message
	  ----     ------                   ----               ----             -------
	  Normal   Starting                 21m                kube-proxy       
	  Normal   Starting                 18m                kube-proxy       
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 21m                kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    21m                kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     21m                kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientPID
	  Normal   NodeReady                21m                kubelet          Node default-k8s-diff-port-382606 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  21m                kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           21m                node-controller  Node default-k8s-diff-port-382606 event: Registered Node default-k8s-diff-port-382606 in Controller
	  Normal   Starting                 18m                kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node default-k8s-diff-port-382606 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 18m                kubelet          Node default-k8s-diff-port-382606 has been rebooted, boot id: 7f2ea5ee-7aae-4716-9364-8ec21adb7cea
	  Normal   RegisteredNode           18m                node-controller  Node default-k8s-diff-port-382606 event: Registered Node default-k8s-diff-port-382606 in Controller
	
	
	==> dmesg <==
	[Dec19 03:36] Booted with the nomodeset parameter. Only the system framebuffer will be available
	[  +0.000012] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.001605] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +0.008324] (rpcbind)[121]: rpcbind.service: Referenced but unset environment variable evaluates to an empty string: RPCBIND_OPTIONS
	[  +0.758703] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +0.091309] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.104294] kauditd_printk_skb: 102 callbacks suppressed
	[Dec19 03:37] kauditd_printk_skb: 171 callbacks suppressed
	[  +0.000088] kauditd_printk_skb: 128 callbacks suppressed
	[  +3.473655] kauditd_printk_skb: 338 callbacks suppressed
	[  +6.923891] kauditd_printk_skb: 55 callbacks suppressed
	[  +6.599835] kauditd_printk_skb: 11 callbacks suppressed
	[ +10.729489] kauditd_printk_skb: 12 callbacks suppressed
	[ +13.266028] kauditd_printk_skb: 42 callbacks suppressed
	
	
	==> etcd [26b15c351a7f5021930cc9f51f1954766cef758efb477c6f239d48814b55ad89] <==
	{"level":"warn","ts":"2025-12-19T03:36:58.850275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.868907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.877249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.896080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:36:58.973742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.352322Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.382031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.399357Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.452062Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.493578Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.521013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.537079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.587673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.610112Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.632754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:37:35.660242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45266","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:46:57.429647Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1127}
	{"level":"info","ts":"2025-12-19T03:46:57.455560Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1127,"took":"25.339881ms","hash":3337858527,"current-db-size-bytes":4153344,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":1712128,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-12-19T03:46:57.455658Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3337858527,"revision":1127,"compact-revision":-1}
	{"level":"info","ts":"2025-12-19T03:51:57.438973Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1374}
	{"level":"info","ts":"2025-12-19T03:51:57.443923Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1374,"took":"3.908189ms","hash":1180286511,"current-db-size-bytes":4153344,"current-db-size":"4.2 MB","current-db-size-in-use-bytes":2027520,"current-db-size-in-use":"2.0 MB"}
	{"level":"info","ts":"2025-12-19T03:51:57.443974Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1180286511,"revision":1374,"compact-revision":1127}
	{"level":"warn","ts":"2025-12-19T03:54:55.599943Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"138.800293ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-12-19T03:54:55.600306Z","caller":"traceutil/trace.go:172","msg":"trace[314929656] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1765; }","duration":"139.23971ms","start":"2025-12-19T03:54:55.461048Z","end":"2025-12-19T03:54:55.600287Z","steps":["trace[314929656] 'range keys from in-memory index tree'  (duration: 138.760105ms)"],"step_count":1}
	{"level":"warn","ts":"2025-12-19T03:54:58.665561Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"144.861185ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8077095107282175664 > lease_revoke:<id:70179b34ae99864b>","response":"size:29"}
	
	
	==> etcd [d61732768d3fb333e642fdb4d61862c8d579da92cf7054c9aefef697244504e5] <==
	{"level":"warn","ts":"2025-12-19T03:34:04.298617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.314545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.327667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.343419Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.367627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59566","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.383689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59580","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.391821Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.405054Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.426047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59620","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.437252Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.447053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.458037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.473493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.479914Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.492986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59712","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.504629Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.516820Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.526538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.546193Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.563536Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.574441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.586915Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.598212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-12-19T03:34:04.679106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59886","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-12-19T03:34:17.949387Z","caller":"traceutil/trace.go:172","msg":"trace[941942705] transaction","detail":"{read_only:false; response_revision:432; number_of_response:1; }","duration":"182.138051ms","start":"2025-12-19T03:34:17.767230Z","end":"2025-12-19T03:34:17.949368Z","steps":["trace[941942705] 'process raft request'  (duration: 178.491344ms)"],"step_count":1}
	
	
	==> kernel <==
	 03:55:35 up 19 min,  0 users,  load average: 0.24, 0.14, 0.15
	Linux default-k8s-diff-port-382606 6.6.95 #1 SMP PREEMPT_DYNAMIC Wed Dec 17 12:49:57 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2025.02"
	
	
	==> kube-apiserver [4acb45618ed01ef24f531e521f546b5b7cfd45d1747acae90c53e91dc213f0f6] <==
	I1219 03:34:07.918995       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1219 03:34:07.949209       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1219 03:34:12.651281       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1219 03:34:13.006263       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:34:13.015238       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1219 03:34:13.309617       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1219 03:35:06.602579       1 conn.go:339] Error on socket receive: read tcp 192.168.72.129:8444->192.168.72.1:48290: use of closed network connection
	I1219 03:35:07.259368       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W1219 03:35:07.267392       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:35:07.267522       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1219 03:35:07.267794       1 handler_proxy.go:143] error resolving kube-system/metrics-server: service "metrics-server" not found
	I1219 03:35:07.434940       1 alloc.go:328] "allocated clusterIPs" service="kube-system/metrics-server" clusterIPs={"IPv4":"10.109.46.149"}
	W1219 03:35:07.451685       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:35:07.451874       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	W1219 03:35:07.458566       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:35:07.458618       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	
	
	==> kube-apiserver [518a94577bb7de5f6f62c647a74b67c644a3a4cea449ef09d71d4ad5df9ad912] <==
	E1219 03:52:00.982144       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:52:00.982157       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E1219 03:52:00.982168       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:52:00.983559       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:53:00.983003       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:53:00.983367       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:53:00.983408       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:53:00.983708       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:53:00.983742       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:53:00.985050       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:55:00.984414       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:55:00.984670       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I1219 03:55:00.984715       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W1219 03:55:00.985887       1 handler_proxy.go:99] no RequestInfo found in the context
	E1219 03:55:00.985952       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I1219 03:55:00.985972       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9caa0d440527b29d084b74cd9fa77197ce53354e034e86874876263937324b73] <==
	I1219 03:49:07.019154       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:49:36.806473       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:49:37.029333       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:50:06.814898       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:50:07.039296       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:50:36.820433       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:50:37.050084       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:51:06.826744       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:51:07.059736       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:51:36.833699       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:51:37.068791       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:52:06.841656       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:52:07.078374       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:52:36.848258       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:52:37.089442       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:53:06.855066       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:53:07.099607       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:53:36.860757       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:53:37.110145       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:54:06.866434       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:54:07.119278       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:54:36.872907       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:54:37.128624       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	E1219 03:55:06.879593       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1219 03:55:07.141063       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-controller-manager [f5b37d825fd69470a5b010883e8992bc8a034549e4044c16dbcdc8b3f8ddc38c] <==
	I1219 03:34:12.353245       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1219 03:34:12.353348       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1219 03:34:12.357849       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1219 03:34:12.363113       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1219 03:34:12.368603       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-382606" podCIDRs=["10.244.0.0/24"]
	I1219 03:34:12.371147       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1219 03:34:12.395931       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1219 03:34:12.396045       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1219 03:34:12.396077       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1219 03:34:12.396605       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1219 03:34:12.397077       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1219 03:34:12.397259       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1219 03:34:12.397265       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1219 03:34:12.397627       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1219 03:34:12.398009       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1219 03:34:12.399266       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1219 03:34:12.399529       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1219 03:34:12.399544       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1219 03:34:12.400127       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1219 03:34:12.403873       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:34:12.406184       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1219 03:34:12.409683       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1219 03:34:12.409694       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1219 03:34:12.409698       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1219 03:34:12.416021       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	
	
	==> kube-proxy [2b86f1c0414105cede5a3456f97249cb2d21efe909ab37873e3ce4a615c86eab] <==
	I1219 03:37:01.799419       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:37:01.900130       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:37:01.900220       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.129"]
	E1219 03:37:01.900457       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:37:01.961090       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:37:01.961150       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:37:01.961192       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:37:01.977893       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:37:01.980700       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:37:01.980716       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:37:01.989122       1 config.go:200] "Starting service config controller"
	I1219 03:37:01.989158       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:37:01.989210       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:37:01.989217       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:37:01.989244       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:37:01.989248       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:37:01.990428       1 config.go:309] "Starting node config controller"
	I1219 03:37:01.990459       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:37:01.990465       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:37:02.089889       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1219 03:37:02.090284       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:37:02.089890       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-proxy [e26689632e68df2c19523daa71cb9f75163a38eef904ec1a7928da46204ceeb3] <==
	I1219 03:34:15.006486       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1219 03:34:15.109902       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1219 03:34:15.109972       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.72.129"]
	E1219 03:34:15.110290       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1219 03:34:15.302728       1 server_linux.go:103] "No iptables support for family" ipFamily="IPv6" error=<
		error listing chain "POSTROUTING" in table "nat": exit status 3: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
		Perhaps ip6tables or your kernel needs to be upgraded.
	 >
	I1219 03:34:15.302878       1 server.go:267] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I1219 03:34:15.302927       1 server_linux.go:132] "Using iptables Proxier"
	I1219 03:34:15.312759       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1219 03:34:15.313145       1 server.go:527] "Version info" version="v1.34.3"
	I1219 03:34:15.313160       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:34:15.318572       1 config.go:200] "Starting service config controller"
	I1219 03:34:15.318597       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1219 03:34:15.318612       1 config.go:106] "Starting endpoint slice config controller"
	I1219 03:34:15.318615       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1219 03:34:15.318624       1 config.go:403] "Starting serviceCIDR config controller"
	I1219 03:34:15.318627       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1219 03:34:15.319111       1 config.go:309] "Starting node config controller"
	I1219 03:34:15.319117       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1219 03:34:15.319127       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1219 03:34:15.419138       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1219 03:34:15.419225       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1219 03:34:15.419689       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [417d2eb47c0a973814eca73db740808aaf83346035e5cd9c14ff6314a66d7849] <==
	I1219 03:36:57.691193       1 serving.go:386] Generated self-signed cert in-memory
	W1219 03:36:59.833443       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1219 03:36:59.833582       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1219 03:36:59.833599       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1219 03:36:59.833606       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1219 03:36:59.894931       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.3"
	I1219 03:36:59.896570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1219 03:36:59.908197       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1219 03:36:59.913129       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:36:59.913408       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1219 03:36:59.915317       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1219 03:36:59.950227       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1219 03:37:01.316677       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kube-scheduler [7a54851195b097fc581bb4fcb777740f44d67a12f637e37cac19dc575fb76ead] <==
	E1219 03:34:05.426747       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:34:05.426838       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:34:05.426896       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:34:05.426945       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1219 03:34:05.426985       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:34:05.427030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1219 03:34:05.427070       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:34:05.427118       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1219 03:34:05.427165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:34:05.427207       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1219 03:34:05.427251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:34:05.428123       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1219 03:34:05.428644       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:34:05.428974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1219 03:34:05.430468       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1219 03:34:06.239376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1219 03:34:06.243449       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1219 03:34:06.395663       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1219 03:34:06.442628       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1219 03:34:06.546385       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1219 03:34:06.555806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1219 03:34:06.657760       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1219 03:34:06.673910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1219 03:34:06.677531       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	I1219 03:34:09.096650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Dec 19 03:51:04 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:51:04.767272    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:51:17 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:51:17.765910    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:51:30 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:51:30.766193    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:51:43 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:51:43.766023    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:51:56 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:51:56.765914    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:52:07 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:52:07.766316    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:52:20 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:52:20.766662    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:52:32 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:52:32.767034    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:52:43 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:52:43.765706    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:52:56 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:52:56.766373    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:53:08 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:53:08.769319    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:53:20 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:53:20.777455    1087 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:53:20 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:53:20.777766    1087 kuberuntime_image.go:43] "Failed to pull image" err="failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Dec 19 03:53:20 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:53:20.777914    1087 kuberuntime_manager.go:1449] "Unhandled Error" err="container metrics-server start failed in pod metrics-server-746fcd58dc-xphdl_kube-system(fb637b66-cb31-46cc-b490-110c2825cacc): ErrImagePull: failed to pull and unpack image \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to resolve reference \"fake.domain/registry.k8s.io/echoserver:1.4\": failed to do request: Head \"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\": dial tcp: lookup fake.domain: no such host" logger="UnhandledError"
	Dec 19 03:53:20 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:53:20.777975    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:53:35 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:53:35.767674    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:53:50 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:53:50.771073    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:54:02 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:54:02.766549    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:54:13 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:54:13.766688    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:54:28 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:54:28.766313    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:54:41 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:54:41.766355    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:54:55 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:54:55.766173    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:55:08 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:55:08.769900    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:55:20 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:55:20.768283    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	Dec 19 03:55:33 default-k8s-diff-port-382606 kubelet[1087]: E1219 03:55:33.766294    1087 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": ErrImagePull: failed to pull and unpack image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to resolve reference \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\": failed to do request: Head \\\"https://fake.domain/v2/registry.k8s.io/echoserver/manifests/1.4\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-746fcd58dc-xphdl" podUID="fb637b66-cb31-46cc-b490-110c2825cacc"
	
	
	==> kubernetes-dashboard [0d9d949e94e6f380d0ff910f058da5d15e70ebec0bbbf77042abc4fc76dd78d4] <==
	I1219 03:37:28.427541       1 main.go:37] "Starting Kubernetes Dashboard Web" version="1.7.0"
	I1219 03:37:28.427793       1 init.go:48] Using in-cluster config
	I1219 03:37:28.428240       1 main.go:57] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> kubernetes-dashboard [4b740253b2e42187c5c5011e1460253c9b2cd55a5b3261393ba8f6ef0a57337c] <==
	10.244.0.1 - - [19/Dec/2025:03:52:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:53:00 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:10 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:20 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:53:30 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:40 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:50 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:53:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:54:00 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:10 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:20 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:54:30 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:40 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:50 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:54:52 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:55:00 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:55:10 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:55:20 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	10.244.0.1 - - [19/Dec/2025:03:55:22 +0000] "GET /healthz HTTP/1.1" 200 13 "" "dashboard/dashboard-api:1.14.0"
	10.244.0.1 - - [19/Dec/2025:03:55:30 +0000] "GET / HTTP/1.1" 200 6 "" "kube-probe/1.34"
	E1219 03:53:18.351362       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:54:18.355219       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	E1219 03:55:18.352867       1 main.go:114] Error scraping node metrics: the server is currently unable to handle the request (get nodes.metrics.k8s.io)
	
	
	==> kubernetes-dashboard [503098129741b3077535c690ccb45bf024c8d611f90bec4ecbbe47b18c85deb3] <==
	I1219 03:37:21.811935       1 main.go:40] "Starting Kubernetes Dashboard API" version="1.14.0"
	I1219 03:37:21.812047       1 init.go:49] Using in-cluster config
	I1219 03:37:21.812747       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1219 03:37:21.812778       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1219 03:37:21.812784       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1219 03:37:21.812788       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1219 03:37:21.881419       1 main.go:119] "Successful initial request to the apiserver" version="v1.34.3"
	I1219 03:37:21.881477       1 client.go:265] Creating in-cluster Sidecar client
	I1219 03:37:21.890563       1 main.go:96] "Listening and serving on" address="0.0.0.0:8000"
	I1219 03:37:21.896833       1 manager.go:101] Successful request to sidecar
	
	
	==> kubernetes-dashboard [a0b5497708d9ced75d92291f6ac713d36cce94b49103ad89b2dc84b7aa7aa541] <==
	I1219 03:37:15.110759       1 main.go:34] "Starting Kubernetes Dashboard Auth" version="1.4.0"
	I1219 03:37:15.111110       1 init.go:49] Using in-cluster config
	I1219 03:37:15.111337       1 main.go:44] "Listening and serving insecurely on" address="0.0.0.0:8000"
	
	
	==> storage-provisioner [8858312cf5133d1dadaf156295497a020ac7e467c5a2f7f19132111df9e8becd] <==
	W1219 03:55:11.432165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:13.436785       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:13.442603       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:15.447576       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:15.456695       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:17.460209       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:17.465181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:19.469096       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:19.474756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:21.478384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:21.486769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:23.490439       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:23.495566       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:25.499218       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:25.508743       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:27.512019       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:27.518040       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:29.521834       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:29.527023       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:31.530586       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:31.535028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:33.538370       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:33.546827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:35.551140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1219 03:55:35.556696       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [ced946dadbf7a9872e9726febd61276e7a03119f9bb6394671740bb262877814] <==
	I1219 03:37:01.658661       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1219 03:37:31.667875       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
helpers_test.go:270: (dbg) Run:  kubectl --context default-k8s-diff-port-382606 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:281: non-running pods: metrics-server-746fcd58dc-xphdl
helpers_test.go:283: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:286: (dbg) Run:  kubectl --context default-k8s-diff-port-382606 describe pod metrics-server-746fcd58dc-xphdl
helpers_test.go:286: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-382606 describe pod metrics-server-746fcd58dc-xphdl: exit status 1 (59.998878ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-xphdl" not found

                                                
                                                
** /stderr **
helpers_test.go:288: kubectl --context default-k8s-diff-port-382606 describe pod metrics-server-746fcd58dc-xphdl: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.40s)
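The kubelet log above shows why the wait timed out: the metrics-server pod's image reference points at fake.domain, which never resolves, so the container stays in ImagePullBackOff for the full 542s. A minimal manual check against the same profile (pod name and namespace are taken from the log; by post-mortem time the pod had already been deleted, hence the NotFound from describe) might look like:

    # List non-running pods across all namespaces, as helpers_test.go does
    kubectl --context default-k8s-diff-port-382606 get pods -A --field-selector=status.phase!=Running

    # If the pod still exists, read the waiting reason directly (expect ImagePullBackOff)
    kubectl --context default-k8s-diff-port-382606 -n kube-system get pod metrics-server-746fcd58dc-xphdl \
      -o jsonpath='{.status.containerStatuses[0].state.waiting.reason}'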

                                                
                                    
TestISOImage/PersistentMounts//data (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//data
=== PAUSE TestISOImage/PersistentMounts//data

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//data
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /data | grep /data"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /data | grep /data": context deadline exceeded (2.003µs)
iso_test.go:99: failed to verify existence of "/data" mount. args "out/minikube-linux-amd64 -p guest-269272 ssh \"df -t ext4 /data | grep /data\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//data (0.00s)
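Each PersistentMounts subtest only shells into the guest and checks that the path is served by the persistent ext4 volume; the failures above report context deadline exceeded after nanoseconds to microseconds, i.e. the parent test's deadline had already expired before the ssh command could run, so they say nothing about the mounts themselves. A manual version of the same check, assuming the guest-269272 profile is still running, mirrors iso_test.go:97:

    # Verify /data is backed by an ext4 filesystem inside the guest
    out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /data | grep /data"

    # Or inspect every ext4 mount at once
    out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4"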

                                                
                                    
TestISOImage/PersistentMounts//var/lib/docker (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/docker | grep /var/lib/docker": context deadline exceeded (368ns)
iso_test.go:99: failed to verify existence of "/var/lib/docker" mount. args "out/minikube-linux-amd64 -p guest-269272 ssh \"df -t ext4 /var/lib/docker | grep /var/lib/docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/docker (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/cni (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/cni
=== PAUSE TestISOImage/PersistentMounts//var/lib/cni

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/cni
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/cni | grep /var/lib/cni": context deadline exceeded (364ns)
iso_test.go:99: failed to verify existence of "/var/lib/cni" mount. args "out/minikube-linux-amd64 -p guest-269272 ssh \"df -t ext4 /var/lib/cni | grep /var/lib/cni\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/cni (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/kubelet (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/kubelet
=== PAUSE TestISOImage/PersistentMounts//var/lib/kubelet

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/kubelet
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet": context deadline exceeded (264ns)
iso_test.go:99: failed to verify existence of "/var/lib/kubelet" mount. args "out/minikube-linux-amd64 -p guest-269272 ssh \"df -t ext4 /var/lib/kubelet | grep /var/lib/kubelet\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/kubelet (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/minikube (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/minikube
=== PAUSE TestISOImage/PersistentMounts//var/lib/minikube

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/minikube
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/minikube | grep /var/lib/minikube": context deadline exceeded (650ns)
iso_test.go:99: failed to verify existence of "/var/lib/minikube" mount. args "out/minikube-linux-amd64 -p guest-269272 ssh \"df -t ext4 /var/lib/minikube | grep /var/lib/minikube\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/minikube (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/toolbox (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/toolbox
=== PAUSE TestISOImage/PersistentMounts//var/lib/toolbox

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/toolbox
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox": context deadline exceeded (323ns)
iso_test.go:99: failed to verify existence of "/var/lib/toolbox" mount. args "out/minikube-linux-amd64 -p guest-269272 ssh \"df -t ext4 /var/lib/toolbox | grep /var/lib/toolbox\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/toolbox (0.00s)

                                                
                                    
TestISOImage/PersistentMounts//var/lib/boot2docker (0s)

                                                
                                                
=== RUN   TestISOImage/PersistentMounts//var/lib/boot2docker
=== PAUSE TestISOImage/PersistentMounts//var/lib/boot2docker

                                                
                                                

                                                
                                                
=== CONT  TestISOImage/PersistentMounts//var/lib/boot2docker
iso_test.go:97: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker"
iso_test.go:97: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker": context deadline exceeded (509ns)
iso_test.go:99: failed to verify existence of "/var/lib/boot2docker" mount. args "out/minikube-linux-amd64 -p guest-269272 ssh \"df -t ext4 /var/lib/boot2docker | grep /var/lib/boot2docker\"": context deadline exceeded
--- FAIL: TestISOImage/PersistentMounts//var/lib/boot2docker (0.00s)

                                                
                                    
TestISOImage/VersionJSON (0s)

                                                
                                                
=== RUN   TestISOImage/VersionJSON
iso_test.go:106: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "cat /version.json"
iso_test.go:106: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "cat /version.json": context deadline exceeded (283ns)
iso_test.go:108: failed to read /version.json. args "out/minikube-linux-amd64 -p guest-269272 ssh \"cat /version.json\"": context deadline exceeded
--- FAIL: TestISOImage/VersionJSON (0.00s)
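As with the mount checks, this is a context-deadline failure rather than a missing file. A manual equivalent, assuming the guest profile is up (piping through python3 -m json.tool is only for readability; the test itself just cats the file):

    out/minikube-linux-amd64 -p guest-269272 ssh "cat /version.json"
    out/minikube-linux-amd64 -p guest-269272 ssh "cat /version.json" | python3 -m json.tool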

                                                
                                    
TestISOImage/eBPFSupport (0s)

                                                
                                                
=== RUN   TestISOImage/eBPFSupport
iso_test.go:125: (dbg) Run:  out/minikube-linux-amd64 -p guest-269272 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"
iso_test.go:125: (dbg) Non-zero exit: out/minikube-linux-amd64 -p guest-269272 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'": context deadline exceeded (275ns)
iso_test.go:127: failed to verify existence of "/sys/kernel/btf/vmlinux" file: args "out/minikube-linux-amd64 -p guest-269272 ssh \"test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'\"": context deadline exceeded
iso_test.go:131: expected file "/sys/kernel/btf/vmlinux" to exist, but it does not. BTF types are required for CO-RE eBPF programs; set CONFIG_DEBUG_INFO_BTF in kernel configuration.
--- FAIL: TestISOImage/eBPFSupport (0.00s)
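The test expects BTF type information at /sys/kernel/btf/vmlinux, which CO-RE eBPF programs rely on and which is only exposed when the guest kernel is built with CONFIG_DEBUG_INFO_BTF=y. Since this run also failed on the already-expired context rather than on the check itself, a manual probe against a running guest would look like the following (reading /proc/config.gz assumes the kernel enables CONFIG_IKCONFIG_PROC, which may not hold for this ISO):

    # Does the kernel expose BTF?
    out/minikube-linux-amd64 -p guest-269272 ssh "test -f /sys/kernel/btf/vmlinux && echo 'OK' || echo 'NOT FOUND'"

    # If the running config is exported, confirm the option directly
    out/minikube-linux-amd64 -p guest-269272 ssh "zcat /proc/config.gz | grep CONFIG_DEBUG_INFO_BTF"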
E1219 03:54:44.846555    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:49.963757    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:54:54.464126    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    

Test pass (356/437)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 32.47
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.15
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.34.3/json-events 15.04
13 TestDownloadOnly/v1.34.3/preload-exists 0
17 TestDownloadOnly/v1.34.3/LogsDuration 0.07
18 TestDownloadOnly/v1.34.3/DeleteAll 0.15
19 TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds 0.13
21 TestDownloadOnly/v1.35.0-rc.1/json-events 13.26
22 TestDownloadOnly/v1.35.0-rc.1/preload-exists 0
26 TestDownloadOnly/v1.35.0-rc.1/LogsDuration 0.07
27 TestDownloadOnly/v1.35.0-rc.1/DeleteAll 0.15
28 TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds 0.14
30 TestBinaryMirror 0.62
31 TestOffline 86.91
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
36 TestAddons/Setup 207.96
38 TestAddons/serial/Volcano 43.31
40 TestAddons/serial/GCPAuth/Namespaces 0.11
41 TestAddons/serial/GCPAuth/FakeCredentials 11.59
44 TestAddons/parallel/Registry 19.65
45 TestAddons/parallel/RegistryCreds 0.7
46 TestAddons/parallel/Ingress 22.13
47 TestAddons/parallel/InspektorGadget 11.89
48 TestAddons/parallel/MetricsServer 6.2
50 TestAddons/parallel/CSI 52.3
51 TestAddons/parallel/Headlamp 21.69
52 TestAddons/parallel/CloudSpanner 5.6
53 TestAddons/parallel/LocalPath 59.5
54 TestAddons/parallel/NvidiaDevicePlugin 6.87
55 TestAddons/parallel/Yakd 11.98
57 TestAddons/StoppedEnableDisable 70.92
58 TestCertOptions 41.54
59 TestCertExpiration 326.78
61 TestForceSystemdFlag 92.63
62 TestForceSystemdEnv 57.79
67 TestErrorSpam/setup 38.58
68 TestErrorSpam/start 0.31
69 TestErrorSpam/status 0.67
70 TestErrorSpam/pause 1.43
71 TestErrorSpam/unpause 1.62
72 TestErrorSpam/stop 4.46
75 TestFunctional/serial/CopySyncFile 0
76 TestFunctional/serial/StartWithProxy 56.38
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 47.96
79 TestFunctional/serial/KubeContext 0.04
80 TestFunctional/serial/KubectlGetPods 0.12
83 TestFunctional/serial/CacheCmd/cache/add_remote 2.93
84 TestFunctional/serial/CacheCmd/cache/add_local 2.49
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
86 TestFunctional/serial/CacheCmd/cache/list 0.06
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
88 TestFunctional/serial/CacheCmd/cache/cache_reload 1.33
89 TestFunctional/serial/CacheCmd/cache/delete 0.11
90 TestFunctional/serial/MinikubeKubectlCmd 0.11
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
92 TestFunctional/serial/ExtraConfig 39.79
93 TestFunctional/serial/ComponentHealth 0.06
94 TestFunctional/serial/LogsCmd 1.21
95 TestFunctional/serial/LogsFileCmd 1.21
96 TestFunctional/serial/InvalidService 4.22
98 TestFunctional/parallel/ConfigCmd 0.38
100 TestFunctional/parallel/DryRun 0.21
101 TestFunctional/parallel/InternationalLanguage 0.13
102 TestFunctional/parallel/StatusCmd 0.75
106 TestFunctional/parallel/ServiceCmdConnect 23.49
107 TestFunctional/parallel/AddonsCmd 0.15
108 TestFunctional/parallel/PersistentVolumeClaim 40.67
110 TestFunctional/parallel/SSHCmd 0.3
111 TestFunctional/parallel/CpCmd 1.08
112 TestFunctional/parallel/MySQL 32.35
113 TestFunctional/parallel/FileSync 0.17
114 TestFunctional/parallel/CertSync 1.09
118 TestFunctional/parallel/NodeLabels 0.06
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.35
122 TestFunctional/parallel/License 0.7
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.19
124 TestFunctional/parallel/Version/short 0.06
125 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
126 TestFunctional/parallel/Version/components 0.77
127 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
128 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
129 TestFunctional/parallel/ImageCommands/ImageBuild 6.01
130 TestFunctional/parallel/ImageCommands/Setup 2.43
140 TestFunctional/parallel/UpdateContextCmd/no_changes 0.06
141 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.07
142 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.06
143 TestFunctional/parallel/ProfileCmd/profile_not_create 0.31
144 TestFunctional/parallel/ProfileCmd/profile_list 0.3
145 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
146 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.31
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.2
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 2.67
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
150 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.75
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.4
153 TestFunctional/parallel/ServiceCmd/DeployApp 18.24
154 TestFunctional/parallel/MountCmd/any-port 10.14
155 TestFunctional/parallel/ServiceCmd/List 0.46
156 TestFunctional/parallel/ServiceCmd/JSONOutput 0.46
157 TestFunctional/parallel/ServiceCmd/HTTPS 0.29
158 TestFunctional/parallel/ServiceCmd/Format 0.28
159 TestFunctional/parallel/ServiceCmd/URL 0.27
160 TestFunctional/parallel/MountCmd/specific-port 1.82
161 TestFunctional/parallel/MountCmd/VerifyCleanup 0.97
162 TestFunctional/delete_echo-server_images 0.04
163 TestFunctional/delete_my-image_image 0.02
164 TestFunctional/delete_minikube_cached_images 0.02
168 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile 0
169 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy 77.97
170 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog 0
171 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart 47.84
172 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext 0.04
173 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods 0.12
176 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote 2.75
177 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local 2.39
178 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete 0.06
179 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list 0.06
180 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node 0.18
181 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload 1.35
182 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete 0.12
183 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd 0.11
184 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly 0.11
185 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig 41.88
186 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth 0.06
187 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd 1.22
188 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd 1.22
189 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService 4.28
191 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd 0.42
193 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun 0.23
194 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage 0.12
195 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd 0.79
199 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect 15.69
200 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd 0.16
201 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim 54.27
203 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd 0.4
204 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd 1.12
205 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL 48.56
206 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync 0.17
207 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync 1.06
211 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels 0.06
213 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled 0.36
215 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License 0.62
216 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short 0.06
217 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create 0.4
218 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components 0.47
219 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port 9.31
220 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list 0.37
221 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output 0.34
231 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port 1.41
232 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup 1.35
233 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp 38.14
234 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort 0.21
235 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable 0.18
236 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson 0.18
237 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml 0.19
238 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild 7.06
239 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup 1.15
240 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon 1.14
241 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon 1.16
242 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon 2.28
243 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile 0.33
244 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove 0.38
245 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile 0.59
246 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon 0.37
247 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes 0.08
248 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster 0.09
249 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters 0.08
250 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List 2.42
251 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput 2.43
252 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS 0.43
253 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format 0.3
254 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL 0.25
255 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images 0.04
256 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image 0.02
257 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images 0.02
261 TestMultiControlPlane/serial/StartCluster 206.17
262 TestMultiControlPlane/serial/DeployApp 9
263 TestMultiControlPlane/serial/PingHostFromPods 1.25
264 TestMultiControlPlane/serial/AddWorkerNode 48.96
265 TestMultiControlPlane/serial/NodeLabels 0.07
266 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.67
267 TestMultiControlPlane/serial/CopyFile 10.67
268 TestMultiControlPlane/serial/StopSecondaryNode 85.16
269 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.49
270 TestMultiControlPlane/serial/RestartSecondaryNode 26.42
271 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.84
272 TestMultiControlPlane/serial/RestartClusterKeepsNodes 389.72
273 TestMultiControlPlane/serial/DeleteSecondaryNode 6.51
274 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.51
275 TestMultiControlPlane/serial/StopCluster 248.03
276 TestMultiControlPlane/serial/RestartCluster 80.61
277 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.48
278 TestMultiControlPlane/serial/AddSecondaryNode 98.62
279 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.66
284 TestJSONOutput/start/Command 79.15
285 TestJSONOutput/start/Audit 0
287 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
288 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
290 TestJSONOutput/pause/Command 0.69
291 TestJSONOutput/pause/Audit 0
293 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/unpause/Command 0.59
297 TestJSONOutput/unpause/Audit 0
299 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/stop/Command 7.16
303 TestJSONOutput/stop/Audit 0
305 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
307 TestErrorJSONOutput 0.22
312 TestMainNoArgs 0.06
313 TestMinikubeProfile 82.08
316 TestMountStart/serial/StartWithMountFirst 20.56
317 TestMountStart/serial/VerifyMountFirst 0.29
318 TestMountStart/serial/StartWithMountSecond 20.92
319 TestMountStart/serial/VerifyMountSecond 0.3
320 TestMountStart/serial/DeleteFirst 0.69
321 TestMountStart/serial/VerifyMountPostDelete 0.31
322 TestMountStart/serial/Stop 1.28
323 TestMountStart/serial/RestartStopped 19.43
324 TestMountStart/serial/VerifyMountPostStop 0.29
327 TestMultiNode/serial/FreshStart2Nodes 106.97
328 TestMultiNode/serial/DeployApp2Nodes 6.92
329 TestMultiNode/serial/PingHostFrom2Pods 0.82
330 TestMultiNode/serial/AddNode 44.09
331 TestMultiNode/serial/MultiNodeLabels 0.06
332 TestMultiNode/serial/ProfileList 0.45
333 TestMultiNode/serial/CopyFile 5.88
334 TestMultiNode/serial/StopNode 2.04
335 TestMultiNode/serial/StartAfterStop 36.47
336 TestMultiNode/serial/RestartKeepsNodes 296.64
337 TestMultiNode/serial/DeleteNode 1.99
338 TestMultiNode/serial/StopMultiNode 173.67
339 TestMultiNode/serial/RestartMultiNode 76.08
340 TestMultiNode/serial/ValidateNameConflict 39.37
345 TestPreload 142.64
347 TestScheduledStopUnix 110.82
351 TestRunningBinaryUpgrade 147.73
353 TestKubernetesUpgrade 145.03
356 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
357 TestNoKubernetes/serial/StartWithK8s 80.15
358 TestNoKubernetes/serial/StartWithStopK8s 43.96
359 TestNoKubernetes/serial/Start 23.67
363 TestISOImage/Setup 20.86
365 TestPause/serial/Start 100.87
366 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
367 TestNoKubernetes/serial/VerifyK8sNotRunning 0.18
368 TestNoKubernetes/serial/ProfileList 0.49
369 TestNoKubernetes/serial/Stop 1.48
374 TestNetworkPlugins/group/false 3.49
375 TestNoKubernetes/serial/StartNoArgs 50.74
398 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.17
399 TestStoppedBinaryUpgrade/Setup 5.17
400 TestStoppedBinaryUpgrade/Upgrade 99.61
401 TestPause/serial/SecondStartNoReconfiguration 49.24
402 TestPause/serial/Pause 0.85
403 TestPause/serial/VerifyStatus 0.25
404 TestPause/serial/Unpause 0.66
405 TestPause/serial/PauseAgain 0.95
406 TestNetworkPlugins/group/auto/Start 81.3
407 TestPause/serial/DeletePaused 1.8
408 TestPause/serial/VerifyDeletedResources 2.53
409 TestNetworkPlugins/group/kindnet/Start 81.21
410 TestStoppedBinaryUpgrade/MinikubeLogs 1.31
411 TestNetworkPlugins/group/calico/Start 104.24
412 TestNetworkPlugins/group/auto/KubeletFlags 0.18
413 TestNetworkPlugins/group/auto/NetCatPod 9.27
414 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
415 TestNetworkPlugins/group/auto/DNS 0.16
416 TestNetworkPlugins/group/auto/Localhost 0.13
417 TestNetworkPlugins/group/auto/HairPin 0.14
418 TestNetworkPlugins/group/kindnet/KubeletFlags 0.19
419 TestNetworkPlugins/group/kindnet/NetCatPod 10.35
420 TestNetworkPlugins/group/kindnet/DNS 0.18
421 TestNetworkPlugins/group/kindnet/Localhost 0.15
422 TestNetworkPlugins/group/kindnet/HairPin 0.14
423 TestNetworkPlugins/group/custom-flannel/Start 83.16
424 TestNetworkPlugins/group/calico/ControllerPod 6.01
425 TestNetworkPlugins/group/enable-default-cni/Start 93.61
426 TestNetworkPlugins/group/calico/KubeletFlags 0.19
427 TestNetworkPlugins/group/calico/NetCatPod 11.23
428 TestNetworkPlugins/group/calico/DNS 0.18
429 TestNetworkPlugins/group/calico/Localhost 0.12
430 TestNetworkPlugins/group/calico/HairPin 0.15
431 TestNetworkPlugins/group/flannel/Start 72.58
432 TestNetworkPlugins/group/bridge/Start 86.72
433 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.21
434 TestNetworkPlugins/group/custom-flannel/NetCatPod 9.26
435 TestNetworkPlugins/group/custom-flannel/DNS 0.16
436 TestNetworkPlugins/group/custom-flannel/Localhost 0.14
437 TestNetworkPlugins/group/custom-flannel/HairPin 0.21
438 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.18
439 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.25
441 TestStartStop/group/old-k8s-version/serial/FirstStart 100.49
442 TestNetworkPlugins/group/flannel/ControllerPod 6.01
443 TestNetworkPlugins/group/enable-default-cni/DNS 0.18
444 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
445 TestNetworkPlugins/group/enable-default-cni/HairPin 0.13
446 TestNetworkPlugins/group/flannel/KubeletFlags 0.2
447 TestNetworkPlugins/group/flannel/NetCatPod 10.27
448 TestNetworkPlugins/group/flannel/DNS 0.17
449 TestNetworkPlugins/group/flannel/Localhost 0.15
450 TestNetworkPlugins/group/flannel/HairPin 0.15
452 TestStartStop/group/no-preload/serial/FirstStart 93.13
454 TestStartStop/group/embed-certs/serial/FirstStart 95.02
455 TestNetworkPlugins/group/bridge/KubeletFlags 0.2
456 TestNetworkPlugins/group/bridge/NetCatPod 10.28
457 TestNetworkPlugins/group/bridge/DNS 0.4
458 TestNetworkPlugins/group/bridge/Localhost 0.15
459 TestNetworkPlugins/group/bridge/HairPin 0.2
461 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 90.05
462 TestStartStop/group/old-k8s-version/serial/DeployApp 12.3
463 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.29
464 TestStartStop/group/old-k8s-version/serial/Stop 81.35
465 TestStartStop/group/no-preload/serial/DeployApp 13.28
466 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.04
467 TestStartStop/group/no-preload/serial/Stop 72
468 TestStartStop/group/embed-certs/serial/DeployApp 12.39
469 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.94
470 TestStartStop/group/embed-certs/serial/Stop 85.21
471 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 12.26
472 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.92
473 TestStartStop/group/default-k8s-diff-port/serial/Stop 81.71
474 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.14
475 TestStartStop/group/old-k8s-version/serial/SecondStart 58.16
476 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.14
477 TestStartStop/group/no-preload/serial/SecondStart 58.68
478 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.15
479 TestStartStop/group/embed-certs/serial/SecondStart 59.24
482 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.29
483 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 61.55
490 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.21
491 TestStartStop/group/old-k8s-version/serial/Pause 2.93
493 TestStartStop/group/newest-cni/serial/FirstStart 47.37
494 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.21
495 TestStartStop/group/no-preload/serial/Pause 2.97
506 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
507 TestStartStop/group/embed-certs/serial/Pause 2.99
508 TestStartStop/group/newest-cni/serial/DeployApp 0
509 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.96
510 TestStartStop/group/newest-cni/serial/Stop 85.41
511 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.19
512 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.63
513 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.14
514 TestStartStop/group/newest-cni/serial/SecondStart 39.73
515 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
516 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
517 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.2
518 TestStartStop/group/newest-cni/serial/Pause 2.39
TestDownloadOnly/v1.28.0/json-events (32.47s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-295046 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-295046 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (32.467920325s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (32.47s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1219 02:25:45.690714    8978 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
I1219 02:25:45.690812    8978 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-295046
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-295046: exit status 85 (67.751398ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-295046 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-295046 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │          │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:13
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:13.273958    8990 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:13.274191    8990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:13.274201    8990 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:13.274205    8990 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:13.274369    8990 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	W1219 02:25:13.274491    8990 root.go:314] Error reading config file at /home/jenkins/minikube-integration/22230-5003/.minikube/config/config.json: open /home/jenkins/minikube-integration/22230-5003/.minikube/config/config.json: no such file or directory
	I1219 02:25:13.274928    8990 out.go:368] Setting JSON to true
	I1219 02:25:13.275837    8990 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":452,"bootTime":1766110661,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:13.275893    8990 start.go:143] virtualization: kvm guest
	I1219 02:25:13.281021    8990 out.go:99] [download-only-295046] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:13.281161    8990 notify.go:221] Checking for updates...
	W1219 02:25:13.281204    8990 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball: no such file or directory
	I1219 02:25:13.282249    8990 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:25:13.283307    8990 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:13.284392    8990 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:25:13.285592    8990 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:25:13.286735    8990 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:25:13.288574    8990 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:25:13.288776    8990 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:13.756110    8990 out.go:99] Using the kvm2 driver based on user configuration
	I1219 02:25:13.756159    8990 start.go:309] selected driver: kvm2
	I1219 02:25:13.756167    8990 start.go:928] validating driver "kvm2" against <nil>
	I1219 02:25:13.756573    8990 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:13.757292    8990 start_flags.go:411] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1219 02:25:13.757482    8990 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:25:13.757518    8990 cni.go:84] Creating CNI manager for ""
	I1219 02:25:13.757581    8990 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 02:25:13.757594    8990 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:13.757659    8990 start.go:353] cluster config:
	{Name:download-only-295046 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-295046 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:13.757881    8990 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:25:13.759304    8990 out.go:99] Downloading VM boot image ...
	I1219 02:25:13.759336    8990 download.go:108] Downloading: https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso.sha256 -> /home/jenkins/minikube-integration/22230-5003/.minikube/cache/iso/amd64/minikube-v1.37.0-1765965980-22186-amd64.iso
	I1219 02:25:28.579139    8990 out.go:99] Starting "download-only-295046" primary control-plane node in "download-only-295046" cluster
	I1219 02:25:28.579181    8990 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1219 02:25:28.731229    8990 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	I1219 02:25:28.731267    8990 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:28.731435    8990 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime containerd
	I1219 02:25:28.733139    8990 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1219 02:25:28.733154    8990 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1219 02:25:29.420589    8990 preload.go:295] Got checksum from GCS API "2746dfda401436a5341e0500068bf339"
	I1219 02:25:29.420701    8990 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:2746dfda401436a5341e0500068bf339 -> /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-295046 host does not exist
	  To start a cluster, run: "minikube start -p download-only-295046"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)
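The LogsDuration output above shows the preload tarball being fetched with an md5 checksum obtained from the GCS API (2746dfda401436a5341e0500068bf339). A minimal Go sketch of re-verifying a cached tarball against that checksum, assuming the cache sits under $HOME/.minikube as in the paths logged above:

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// Checksum reported by the GCS API in the log above.
	const want = "2746dfda401436a5341e0500068bf339"
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	// Assumed cache layout, matching the .minikube paths in the log.
	path := filepath.Join(home, ".minikube", "cache", "preloaded-tarball",
		"preloaded-images-k8s-v18-v1.28.0-containerd-overlay2-amd64.tar.lz4")

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	h := md5.New()
	if _, err := io.Copy(h, f); err != nil {
		log.Fatal(err)
	}
	got := hex.EncodeToString(h.Sum(nil))
	fmt.Printf("md5 %s, matches expected: %v\n", got, got == want)
}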

TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.15s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-295046
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.34.3/json-events (15.04s)

=== RUN   TestDownloadOnly/v1.34.3/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-473979 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-473979 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (15.037333683s)
--- PASS: TestDownloadOnly/v1.34.3/json-events (15.04s)

TestDownloadOnly/v1.34.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.3/preload-exists
I1219 02:26:01.075168    8978 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
I1219 02:26:01.075252    8978 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.3/preload-exists (0.00s)

TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.3/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-473979
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-473979: exit status 85 (64.839165ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                        ARGS                                                                                         │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-295046 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-295046 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                               │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-295046                                                                                                                                                             │ download-only-295046 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-473979 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-473979 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:25:46
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:25:46.086133    9268 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:25:46.086346    9268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:46.086353    9268 out.go:374] Setting ErrFile to fd 2...
	I1219 02:25:46.086358    9268 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:25:46.086530    9268 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:25:46.086954    9268 out.go:368] Setting JSON to true
	I1219 02:25:46.087692    9268 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":485,"bootTime":1766110661,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:25:46.087740    9268 start.go:143] virtualization: kvm guest
	I1219 02:25:46.089413    9268 out.go:99] [download-only-473979] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:25:46.089518    9268 notify.go:221] Checking for updates...
	I1219 02:25:46.090437    9268 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:25:46.091487    9268 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:25:46.092449    9268 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:25:46.093342    9268 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:25:46.094236    9268 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:25:46.095981    9268 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:25:46.096182    9268 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:25:46.124733    9268 out.go:99] Using the kvm2 driver based on user configuration
	I1219 02:25:46.124760    9268 start.go:309] selected driver: kvm2
	I1219 02:25:46.124768    9268 start.go:928] validating driver "kvm2" against <nil>
	I1219 02:25:46.125059    9268 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:25:46.125512    9268 start_flags.go:411] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1219 02:25:46.125659    9268 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:25:46.125689    9268 cni.go:84] Creating CNI manager for ""
	I1219 02:25:46.125746    9268 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 02:25:46.125758    9268 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 02:25:46.125813    9268 start.go:353] cluster config:
	{Name:download-only-473979 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.3 ClusterName:download-only-473979 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Cont
ainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:25:46.125907    9268 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:25:46.126920    9268 out.go:99] Starting "download-only-473979" primary control-plane node in "download-only-473979" cluster
	I1219 02:25:46.126944    9268 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 02:25:46.798522    9268 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	I1219 02:25:46.798552    9268 cache.go:65] Caching tarball of preloaded images
	I1219 02:25:46.798687    9268 preload.go:188] Checking if preload exists for k8s version v1.34.3 and runtime containerd
	I1219 02:25:46.800105    9268 out.go:99] Downloading Kubernetes v1.34.3 preload ...
	I1219 02:25:46.800123    9268 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1219 02:25:47.485117    9268 preload.go:295] Got checksum from GCS API "8ed8b49ee38344137d62ea681aa755ac"
	I1219 02:25:47.485167    9268 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.34.3/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4?checksum=md5:8ed8b49ee38344137d62ea681aa755ac -> /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.3-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-473979 host does not exist
	  To start a cluster, run: "minikube start -p download-only-473979"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.3/LogsDuration (0.07s)

TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.3/DeleteAll (0.15s)

TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-473979
--- PASS: TestDownloadOnly/v1.34.3/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.35.0-rc.1/json-events (13.26s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-201194 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-201194 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd: (13.258518308s)
--- PASS: TestDownloadOnly/v1.35.0-rc.1/json-events (13.26s)

TestDownloadOnly/v1.35.0-rc.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/preload-exists
I1219 02:26:14.679117    8978 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
I1219 02:26:14.679156    8978 preload.go:203] Found local preload: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-rc.1/preload-exists (0.00s)

TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-201194
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-201194: exit status 85 (65.652414ms)
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                           ARGS                                                                                           │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-295046 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd      │ download-only-295046 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ delete  │ -p download-only-295046                                                                                                                                                                  │ download-only-295046 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │ 19 Dec 25 02:25 UTC │
	│ start   │ -o=json --download-only -p download-only-473979 --force --alsologtostderr --kubernetes-version=v1.34.3 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd      │ download-only-473979 │ jenkins │ v1.37.0 │ 19 Dec 25 02:25 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                    │ minikube             │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │ 19 Dec 25 02:26 UTC │
	│ delete  │ -p download-only-473979                                                                                                                                                                  │ download-only-473979 │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │ 19 Dec 25 02:26 UTC │
	│ start   │ -o=json --download-only -p download-only-201194 --force --alsologtostderr --kubernetes-version=v1.35.0-rc.1 --container-runtime=containerd --driver=kvm2  --container-runtime=containerd │ download-only-201194 │ jenkins │ v1.37.0 │ 19 Dec 25 02:26 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/19 02:26:01
	Running on machine: ubuntu-20-agent-5
	Binary: Built with gc go1.25.5 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1219 02:26:01.468574    9477 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:26:01.468859    9477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:01.468869    9477 out.go:374] Setting ErrFile to fd 2...
	I1219 02:26:01.468873    9477 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:26:01.469144    9477 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:26:01.469630    9477 out.go:368] Setting JSON to true
	I1219 02:26:01.470454    9477 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":500,"bootTime":1766110661,"procs":173,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:26:01.470502    9477 start.go:143] virtualization: kvm guest
	I1219 02:26:01.472318    9477 out.go:99] [download-only-201194] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:26:01.472442    9477 notify.go:221] Checking for updates...
	I1219 02:26:01.473899    9477 out.go:171] MINIKUBE_LOCATION=22230
	I1219 02:26:01.475200    9477 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:26:01.476400    9477 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:26:01.477408    9477 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:26:01.478461    9477 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1219 02:26:01.480417    9477 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1219 02:26:01.480630    9477 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:26:01.509149    9477 out.go:99] Using the kvm2 driver based on user configuration
	I1219 02:26:01.509189    9477 start.go:309] selected driver: kvm2
	I1219 02:26:01.509197    9477 start.go:928] validating driver "kvm2" against <nil>
	I1219 02:26:01.509475    9477 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1219 02:26:01.509967    9477 start_flags.go:411] Using suggested 6144MB memory alloc based on sys=32093MB, container=0MB
	I1219 02:26:01.510147    9477 start_flags.go:975] Wait components to verify : map[apiserver:true system_pods:true]
	I1219 02:26:01.510178    9477 cni.go:84] Creating CNI manager for ""
	I1219 02:26:01.510234    9477 cni.go:146] "kvm2" driver + "containerd" runtime found, recommending bridge
	I1219 02:26:01.510246    9477 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1219 02:26:01.510305    9477 start.go:353] cluster config:
	{Name:download-only-201194 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:6144 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:download-only-201194 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:26:01.510397    9477 iso.go:125] acquiring lock: {Name:mk731c10e1c86af465f812178d88a0d6fc01b0cc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1219 02:26:01.511626    9477 out.go:99] Starting "download-only-201194" primary control-plane node in "download-only-201194" cluster
	I1219 02:26:01.511662    9477 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 02:26:02.180731    9477 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	I1219 02:26:02.180810    9477 cache.go:65] Caching tarball of preloaded images
	I1219 02:26:02.181048    9477 preload.go:188] Checking if preload exists for k8s version v1.35.0-rc.1 and runtime containerd
	I1219 02:26:02.182618    9477 out.go:99] Downloading Kubernetes v1.35.0-rc.1 preload ...
	I1219 02:26:02.182632    9477 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4 from gcs api...
	I1219 02:26:02.335976    9477 preload.go:295] Got checksum from GCS API "ffe652c02cd8d6c779ed399620f0c4bd"
	I1219 02:26:02.336037    9477 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-rc.1/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:ffe652c02cd8d6c779ed399620f0c4bd -> /home/jenkins/minikube-integration/22230-5003/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.35.0-rc.1-containerd-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-201194 host does not exist
	  To start a cluster, run: "minikube start -p download-only-201194"
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-rc.1/LogsDuration (0.07s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAll (0.15s)

TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-201194
--- PASS: TestDownloadOnly/v1.35.0-rc.1/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.62s)

=== RUN   TestBinaryMirror
I1219 02:26:15.432411    8978 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.3/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-737964 --alsologtostderr --binary-mirror http://127.0.0.1:45471 --driver=kvm2  --container-runtime=containerd
helpers_test.go:176: Cleaning up "binary-mirror-737964" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-737964
--- PASS: TestBinaryMirror (0.62s)
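TestBinaryMirror points --binary-mirror at a local HTTP endpoint (http://127.0.0.1:45471 above) so that kubectl and the other Kubernetes binaries are fetched from it rather than from dl.k8s.io. A minimal Go sketch of such a mirror, assuming a local ./binary-mirror directory laid out like the upstream release paths; this is only an illustration, not the helper the test itself starts:

package main

import (
	"log"
	"net/http"
)

func main() {
	// Serve a directory of cached Kubernetes release binaries over HTTP so that
	// `minikube start --binary-mirror http://127.0.0.1:45471` can fetch from it.
	http.Handle("/", http.FileServer(http.Dir("./binary-mirror")))
	log.Println("binary mirror listening on 127.0.0.1:45471")
	log.Fatal(http.ListenAndServe("127.0.0.1:45471", nil))
}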

TestOffline (86.91s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-638178 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-638178 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd: (1m25.613584846s)
helpers_test.go:176: Cleaning up "offline-containerd-638178" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-638178
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-638178: (1.300588571s)
--- PASS: TestOffline (86.91s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-925443
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-925443: exit status 85 (60.481114ms)
-- stdout --
	* Profile "addons-925443" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925443"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-925443
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-925443: exit status 85 (61.069633ms)
-- stdout --
	* Profile "addons-925443" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-925443"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (207.96s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-linux-amd64 start -p addons-925443 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-linux-amd64 start -p addons-925443 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=kvm2  --container-runtime=containerd --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (3m27.962730336s)
--- PASS: TestAddons/Setup (207.96s)

TestAddons/serial/Volcano (43.31s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:886: volcano-controller stabilized in 29.888372ms
addons_test.go:878: volcano-admission stabilized in 29.961684ms
addons_test.go:870: volcano-scheduler stabilized in 30.128775ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-xs2hb" [7ad2e002-eff9-4e19-85ff-e5e723ea9729] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 6.003293986s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-899m6" [3efaad7b-3d12-49f1-915a-f5711c68c180] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.003233392s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-gg6cr" [a2504520-3248-4275-a308-e1034d2ee084] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 5.003495324s
addons_test.go:905: (dbg) Run:  kubectl --context addons-925443 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-925443 create -f testdata/vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-925443 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [20a11be3-b2fd-4018-85d0-50873ee32360] Pending
helpers_test.go:353: "test-job-nginx-0" [20a11be3-b2fd-4018-85d0-50873ee32360] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [20a11be3-b2fd-4018-85d0-50873ee32360] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 15.004146081s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable volcano --alsologtostderr -v=1: (11.909472929s)
--- PASS: TestAddons/serial/Volcano (43.31s)
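Each Volcano check above reduces to waiting until a pod matching a label selector reports Running. A minimal client-go sketch of that polling loop, assuming the default kubeconfig location and the volcano-system namespace from the log; the suite uses its own helpers rather than this code:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll for up to 6 minutes, the same timeout the test uses.
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("volcano-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "app=volcano-scheduler"})
		if err == nil {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("running:", p.Name)
					return
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	log.Fatal("timed out waiting for app=volcano-scheduler")
}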

TestAddons/serial/GCPAuth/Namespaces (0.11s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-925443 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-925443 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

TestAddons/serial/GCPAuth/FakeCredentials (11.59s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-925443 create -f testdata/busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-925443 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1a9fd2d5-0370-4e06-a472-df3ec641e196] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1a9fd2d5-0370-4e06-a472-df3ec641e196] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 11.0053981s
addons_test.go:696: (dbg) Run:  kubectl --context addons-925443 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-925443 describe sa gcp-auth-test
addons_test.go:746: (dbg) Run:  kubectl --context addons-925443 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (11.59s)

TestAddons/parallel/Registry (19.65s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 7.175797ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-x6d56" [9e9708bd-091c-4b1f-9f6c-a8047ec0d0a1] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003733088s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-bxf8l" [3a4ece9f-3030-4ac5-a02e-42999b4771f7] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.003015333s
addons_test.go:394: (dbg) Run:  kubectl --context addons-925443 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-925443 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-925443 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.913685172s)
addons_test.go:413: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 ip
2025/12/19 02:31:07 [DEBUG] GET http://192.168.39.94:5000
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (19.65s)
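The registry test probes the addon twice: in-cluster with wget against registry.kube-system.svc.cluster.local, and from the host with a GET against the node IP on port 5000. A minimal Go sketch of the host-side probe against the registry's standard /v2/ ping endpoint, assuming the 192.168.39.94:5000 address reported above:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// 192.168.39.94:5000 is the registry address reported in the log above.
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get("http://192.168.39.94:5000/v2/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("registry responded with", resp.Status)
}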

TestAddons/parallel/RegistryCreds (0.7s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 9.212913ms
addons_test.go:327: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-925443
addons_test.go:334: (dbg) Run:  kubectl --context addons-925443 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.70s)

TestAddons/parallel/Ingress (22.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-925443 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-925443 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-925443 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [30c3e14b-8d51-42ed-a0b0-0068dd7f3c0a] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [30c3e14b-8d51-42ed-a0b0-0068dd7f3c0a] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.003008874s
I1219 02:31:22.866383    8978 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:290: (dbg) Run:  kubectl --context addons-925443 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:295: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 ip
addons_test.go:301: (dbg) Run:  nslookup hello-john.test 192.168.39.94
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable ingress-dns --alsologtostderr -v=1: (1.172583628s)
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable ingress --alsologtostderr -v=1: (7.823884458s)
--- PASS: TestAddons/parallel/Ingress (22.13s)
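The ingress test sends its request with an explicit Host: nginx.example.com header so the controller routes by host name rather than by IP. A minimal Go sketch of the same host-based request made from outside the VM, assuming the 192.168.39.94 node IP from the log:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://192.168.39.94/", nil)
	if err != nil {
		panic(err)
	}
	// Route by host: the ingress rule matches nginx.example.com, not the raw IP.
	req.Host = "nginx.example.com"
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}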

TestAddons/parallel/InspektorGadget (11.89s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-ftsgl" [cfe9271a-2ed2-45dc-909f-bb611179bc75] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004609777s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable inspektor-gadget --alsologtostderr -v=1: (5.883224329s)
--- PASS: TestAddons/parallel/InspektorGadget (11.89s)

TestAddons/parallel/MetricsServer (6.2s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 7.230459ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-chpvn" [52c217bc-7eb9-4f1c-9e9e-7602c8d67935] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.007481305s
addons_test.go:465: (dbg) Run:  kubectl --context addons-925443 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable metrics-server --alsologtostderr -v=1: (1.104409754s)
--- PASS: TestAddons/parallel/MetricsServer (6.20s)

TestAddons/parallel/CSI (52.3s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1219 02:31:08.378676    8978 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1219 02:31:08.389918    8978 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1219 02:31:08.389950    8978 kapi.go:107] duration metric: took 11.281617ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 11.296137ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-925443 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-925443 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [fc122e62-332a-4db6-a899-f2743081fb5b] Pending
helpers_test.go:353: "task-pv-pod" [fc122e62-332a-4db6-a899-f2743081fb5b] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.003069803s
addons_test.go:574: (dbg) Run:  kubectl --context addons-925443 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-925443 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:436: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:428: (dbg) Run:  kubectl --context addons-925443 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-925443 delete pod task-pv-pod
addons_test.go:590: (dbg) Run:  kubectl --context addons-925443 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-925443 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-925443 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [5d5e458f-cf1b-40b8-ad50-46c9b58c014e] Pending
helpers_test.go:353: "task-pv-pod-restore" [5d5e458f-cf1b-40b8-ad50-46c9b58c014e] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003787207s
addons_test.go:616: (dbg) Run:  kubectl --context addons-925443 delete pod task-pv-pod-restore
addons_test.go:620: (dbg) Run:  kubectl --context addons-925443 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-925443 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.8089442s)
--- PASS: TestAddons/parallel/CSI (52.30s)

TestAddons/parallel/Headlamp (21.69s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-925443 --alsologtostderr -v=1
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-644w6" [2bc8bf28-8e33-4ba5-aabb-f9ca7c16c9b4] Pending
helpers_test.go:353: "headlamp-dfcdc64b-644w6" [2bc8bf28-8e33-4ba5-aabb-f9ca7c16c9b4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-644w6" [2bc8bf28-8e33-4ba5-aabb-f9ca7c16c9b4] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 15.003926377s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable headlamp --alsologtostderr -v=1: (5.820319469s)
--- PASS: TestAddons/parallel/Headlamp (21.69s)

TestAddons/parallel/CloudSpanner (5.6s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-4jqvx" [186ba615-c737-4314-8802-a382800a50b2] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.006851148s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.60s)

TestAddons/parallel/LocalPath (59.5s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-925443 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-925443 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [72a0e738-19f3-4a90-85c0-3f1699ce3da2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [72a0e738-19f3-4a90-85c0-3f1699ce3da2] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [72a0e738-19f3-4a90-85c0-3f1699ce3da2] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.003303015s
addons_test.go:969: (dbg) Run:  kubectl --context addons-925443 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 ssh "cat /opt/local-path-provisioner/pvc-d3ecd159-313f-4235-8e70-f2ba341ca0db_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-925443 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-925443 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.656141082s)
--- PASS: TestAddons/parallel/LocalPath (59.50s)

TestAddons/parallel/NvidiaDevicePlugin (6.87s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-kvmhv" [d84403db-564b-4c60-84cb-6d98e0baa755] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.005682838s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.87s)

TestAddons/parallel/Yakd (11.98s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-6654c87f9b-f9t8q" [ceea4498-f632-4424-8374-80abf3ad7b96] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002938476s
addons_test.go:1055: (dbg) Run:  out/minikube-linux-amd64 -p addons-925443 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-linux-amd64 -p addons-925443 addons disable yakd --alsologtostderr -v=1: (5.975824754s)
--- PASS: TestAddons/parallel/Yakd (11.98s)

TestAddons/StoppedEnableDisable (70.92s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-925443
addons_test.go:174: (dbg) Done: out/minikube-linux-amd64 stop -p addons-925443: (1m10.729727339s)
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-925443
addons_test.go:182: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-925443
addons_test.go:187: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-925443
--- PASS: TestAddons/StoppedEnableDisable (70.92s)

TestCertOptions (41.54s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-323829 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-323829 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=containerd: (40.240930662s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-323829 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-323829 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-323829 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-323829" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-323829
--- PASS: TestCertOptions (41.54s)

TestCertExpiration (326.78s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-879517 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-879517 --memory=3072 --cert-expiration=3m --driver=kvm2  --container-runtime=containerd: (1m26.873006395s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-879517 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-879517 --memory=3072 --cert-expiration=8760h --driver=kvm2  --container-runtime=containerd: (58.639455687s)
helpers_test.go:176: Cleaning up "cert-expiration-879517" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-879517
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-879517: (1.266591397s)
--- PASS: TestCertExpiration (326.78s)

TestForceSystemdFlag (92.63s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-718725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1219 03:26:39.041972    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-718725 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m31.484772594s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-718725 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-flag-718725" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-718725
--- PASS: TestForceSystemdFlag (92.63s)

TestForceSystemdEnv (57.79s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-071438 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-071438 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (56.240852485s)
docker_test.go:121: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-071438 ssh "cat /etc/containerd/config.toml"
helpers_test.go:176: Cleaning up "force-systemd-env-071438" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-071438
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-071438: (1.352068031s)
--- PASS: TestForceSystemdEnv (57.79s)

TestErrorSpam/setup (38.58s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-194702 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-194702 --driver=kvm2  --container-runtime=containerd
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-194702 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-194702 --driver=kvm2  --container-runtime=containerd: (38.57682846s)
--- PASS: TestErrorSpam/setup (38.58s)

TestErrorSpam/start (0.31s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 start --dry-run
--- PASS: TestErrorSpam/start (0.31s)

TestErrorSpam/status (0.67s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 status
--- PASS: TestErrorSpam/status (0.67s)

TestErrorSpam/pause (1.43s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 pause
--- PASS: TestErrorSpam/pause (1.43s)

TestErrorSpam/unpause (1.62s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 unpause
--- PASS: TestErrorSpam/unpause (1.62s)

TestErrorSpam/stop (4.46s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 stop: (1.655660556s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 stop: (1.273001525s)
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 stop
error_spam_test.go:172: (dbg) Done: out/minikube-linux-amd64 -p nospam-194702 --log_dir /tmp/nospam-194702 stop: (1.526848923s)
--- PASS: TestErrorSpam/stop (4.46s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/test/nested/copy/8978/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (56.38s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-991175 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd
E1219 02:34:44.846910    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:44.852259    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:44.862674    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:44.882978    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:44.923426    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:45.003821    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:45.164250    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:45.484882    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:46.125835    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:47.406318    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:49.968088    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:34:55.088789    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-991175 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd: (56.377224011s)
--- PASS: TestFunctional/serial/StartWithProxy (56.38s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (47.96s)

=== RUN   TestFunctional/serial/SoftStart
I1219 02:34:56.259122    8978 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-991175 --alsologtostderr -v=8
E1219 02:35:05.329630    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:35:25.809948    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-991175 --alsologtostderr -v=8: (47.956738284s)
functional_test.go:678: soft start took 47.957485905s for "functional-991175" cluster.
I1219 02:35:44.216196    8978 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/SoftStart (47.96s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.12s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-991175 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.12s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 cache add registry.k8s.io/pause:3.3: (1.011441242s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.93s)

TestFunctional/serial/CacheCmd/cache/add_local (2.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-991175 /tmp/TestFunctionalserialCacheCmdcacheadd_local2216843127/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cache add minikube-local-cache-test:functional-991175
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 cache add minikube-local-cache-test:functional-991175: (2.144571383s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cache delete minikube-local-cache-test:functional-991175
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-991175
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.49s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.33s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (169.647806ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.33s)

TestFunctional/serial/CacheCmd/cache/delete (0.11s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.11s)

TestFunctional/serial/MinikubeKubectlCmd (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 kubectl -- --context functional-991175 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-991175 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (39.79s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-991175 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1219 02:36:06.771704    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-991175 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (39.789657759s)
functional_test.go:776: restart took 39.789764521s for "functional-991175" cluster.
I1219 02:36:31.544434    8978 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestFunctional/serial/ExtraConfig (39.79s)

TestFunctional/serial/ComponentHealth (0.06s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-991175 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

TestFunctional/serial/LogsCmd (1.21s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 logs: (1.206415711s)
--- PASS: TestFunctional/serial/LogsCmd (1.21s)

TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 logs --file /tmp/TestFunctionalserialLogsFileCmd3463294549/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 logs --file /tmp/TestFunctionalserialLogsFileCmd3463294549/001/logs.txt: (1.211702867s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/serial/InvalidService (4.22s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-991175 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-991175
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-991175: exit status 115 (239.984267ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.176:31145 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-991175 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.22s)

TestFunctional/parallel/ConfigCmd (0.38s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 config get cpus: exit status 14 (56.145306ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 config get cpus: exit status 14 (64.81781ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)

TestFunctional/parallel/DryRun (0.21s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-991175 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-991175 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (111.102379ms)

-- stdout --
	* [functional-991175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1219 02:37:08.599400   15401 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:37:08.599629   15401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:37:08.599638   15401 out.go:374] Setting ErrFile to fd 2...
	I1219 02:37:08.599642   15401 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:37:08.599801   15401 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:37:08.600197   15401 out.go:368] Setting JSON to false
	I1219 02:37:08.601425   15401 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1168,"bootTime":1766110661,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:37:08.601489   15401 start.go:143] virtualization: kvm guest
	I1219 02:37:08.603462   15401 out.go:179] * [functional-991175] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:37:08.604666   15401 notify.go:221] Checking for updates...
	I1219 02:37:08.604693   15401 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:37:08.605973   15401 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:37:08.607637   15401 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:37:08.609050   15401 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:37:08.610232   15401 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:37:08.611349   15401 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:37:08.612845   15401 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:37:08.613327   15401 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:37:08.649717   15401 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:37:08.650921   15401 start.go:309] selected driver: kvm2
	I1219 02:37:08.650940   15401 start.go:928] validating driver "kvm2" against &{Name:functional-991175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-991175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:37:08.651094   15401 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:37:08.653417   15401 out.go:203] 
	W1219 02:37:08.654512   15401 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:37:08.655476   15401 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-991175 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.21s)

TestFunctional/parallel/InternationalLanguage (0.13s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-991175 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-991175 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd: exit status 23 (126.968116ms)

-- stdout --
	* [functional-991175] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1219 02:37:04.663929   15202 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:37:04.664262   15202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:37:04.664274   15202 out.go:374] Setting ErrFile to fd 2...
	I1219 02:37:04.664282   15202 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:37:04.664714   15202 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:37:04.665395   15202 out.go:368] Setting JSON to false
	I1219 02:37:04.666794   15202 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1164,"bootTime":1766110661,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:37:04.666878   15202 start.go:143] virtualization: kvm guest
	I1219 02:37:04.668232   15202 out.go:179] * [functional-991175] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1219 02:37:04.669625   15202 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:37:04.669663   15202 notify.go:221] Checking for updates...
	I1219 02:37:04.672263   15202 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:37:04.674040   15202 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:37:04.675445   15202 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:37:04.676577   15202 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:37:04.677688   15202 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:37:04.679162   15202 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:37:04.679580   15202 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:37:04.709961   15202 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1219 02:37:04.710712   15202 start.go:309] selected driver: kvm2
	I1219 02:37:04.710724   15202 start.go:928] validating driver "kvm2" against &{Name:functional-991175 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.34.3 ClusterName:functional-991175 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.176 Port:8441 KubernetesVersion:v1.34.3 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2
6280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:37:04.710822   15202 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:37:04.712631   15202 out.go:203] 
	W1219 02:37:04.713822   15202 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 02:37:04.715050   15202 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.13s)
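
The French messages above are the localized rendering of the same RSRC_INSUFFICIENT_REQ_MEMORY failure. A hypothetical way to reproduce the localized path by hand, assuming minikube selects the translation from the standard locale environment variables (the LC_ALL value below is illustrative; the test drives this internally):

    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-991175 --dry-run \
      --memory 250MB --alsologtostderr --driver=kvm2 --container-runtime=containerd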

                                                
                                    
TestFunctional/parallel/StatusCmd (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.75s)
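
The second invocation above uses a Go template over the status fields .Host, .Kubelet, .APIServer and .Kubeconfig ("kublet" in the logged command is just label text, not a field name). The same three queries, quoted for interactive use:

    out/minikube-linux-amd64 -p functional-991175 status
    out/minikube-linux-amd64 -p functional-991175 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    out/minikube-linux-amd64 -p functional-991175 status -o json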

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (23.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-991175 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-991175 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-czl7n" [0238d7b6-85df-4964-a6e6-7fb14714d248] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-czl7n" [0238d7b6-85df-4964-a6e6-7fb14714d248] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 23.009649516s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.176:32074
functional_test.go:1680: http://192.168.39.176:32074: success! body:
Request served by hello-node-connect-7d85dfc575-czl7n

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.176:32074
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctional/parallel/ServiceCmdConnect (23.49s)
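
The sequence above (create a deployment, expose it as a NodePort service, resolve the URL, probe it over HTTP) can be replayed by hand; the curl step below stands in for the harness's internal HTTP check and is an assumption, not part of the recorded test:

    kubectl --context functional-991175 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-991175 expose deployment hello-node-connect --type=NodePort --port=8080
    URL=$(out/minikube-linux-amd64 -p functional-991175 service hello-node-connect --url)
    curl -s "$URL"   # echo-server reflects the request, as in the body logged above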

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (40.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [0b827772-7dd9-4175-86bc-0507e1b78055] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004271107s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-991175 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-991175 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-991175 get pvc myclaim -o=json
I1219 02:36:45.305602    8978 retry.go:31] will retry after 1.997193852s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:b82f924b-5325-422b-8523-48d89d9a1c47 ResourceVersion:739 Generation:0 CreationTimestamp:2025-12-19 02:36:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc00541df00 VolumeMode:0xc00541df10 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-991175 get pvc myclaim -o=json
I1219 02:36:47.375777    8978 retry.go:31] will retry after 2.86795673s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:b82f924b-5325-422b-8523-48d89d9a1c47 ResourceVersion:739 Generation:0 CreationTimestamp:2025-12-19 02:36:45 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc001cb1860 VolumeMode:0xc001cb1870 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-991175 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-991175 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:36:50.531616    8978 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [baeb14fb-f2fe-4e73-b13a-f6ae1bf94ed4] Pending
helpers_test.go:353: "sp-pod" [baeb14fb-f2fe-4e73-b13a-f6ae1bf94ed4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [baeb14fb-f2fe-4e73-b13a-f6ae1bf94ed4] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 20.01084674s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-991175 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-991175 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-991175 delete -f testdata/storage-provisioner/pod.yaml: (1.791824272s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-991175 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:37:12.607652    8978 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [f4d8f0f0-a770-4556-8fcb-88c02bcdb4a9] Pending
helpers_test.go:353: "sp-pod" [f4d8f0f0-a770-4556-8fcb-88c02bcdb4a9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [f4d8f0f0-a770-4556-8fcb-88c02bcdb4a9] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.004023169s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-991175 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (40.67s)
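
The claim the test applies can be reconstructed from the last-applied-configuration annotation logged above; a sketch that applies the same spec inline (the heredoc mirrors that annotation rather than quoting testdata/storage-provisioner/pvc.yaml verbatim):

    kubectl --context functional-991175 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
      namespace: default
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 500Mi
      volumeMode: Filesystem
    EOF
    # Check the phase; the retries above show it moving from "Pending" to "Bound".
    kubectl --context functional-991175 get pvc myclaim -o jsonpath='{.status.phase}'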

                                                
                                    
TestFunctional/parallel/SSHCmd (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.30s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh -n functional-991175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cp functional-991175:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3844836246/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh -n functional-991175 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh -n functional-991175 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.08s)
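
A quick way to confirm the round-trip above is lossless (the checksum comparison is an addition, not something the test performs):

    out/minikube-linux-amd64 -p functional-991175 cp testdata/cp-test.txt /home/docker/cp-test.txt
    out/minikube-linux-amd64 -p functional-991175 cp functional-991175:/home/docker/cp-test.txt /tmp/cp-test.roundtrip.txt
    sha256sum testdata/cp-test.txt /tmp/cp-test.roundtrip.txt   # the two digests should match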

                                                
                                    
TestFunctional/parallel/MySQL (32.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-991175 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-554d8" [9ea23ba6-0bd8-4e5f-90c6-7037d545eb69] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-554d8" [9ea23ba6-0bd8-4e5f-90c6-7037d545eb69] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 24.005556864s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;": exit status 1 (266.21594ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:37:03.313819    8978 retry.go:31] will retry after 519.600144ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;": exit status 1 (245.048893ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:37:04.079037    8978 retry.go:31] will retry after 1.202883236s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;": exit status 1 (174.425684ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:37:05.457316    8978 retry.go:31] will retry after 1.146159552s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;": exit status 1 (222.823732ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:37:06.827564    8978 retry.go:31] will retry after 3.996428334s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (32.35s)
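
The retried "show databases" attempts above are expected: the pod reports Running before mysqld finishes initializing, so early execs fail first with ERROR 2002 (server socket not up yet) and then ERROR 1045 while initialization completes. A minimal polling loop mirroring the harness's retry behavior, with the pod name and password taken from the log:

    # Give up after roughly two minutes of polling.
    for attempt in $(seq 1 24); do
      kubectl --context functional-991175 exec mysql-6bcdcbc558-554d8 -- \
        mysql -ppassword -e "show databases;" && break
      sleep 5
    done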

                                                
                                    
TestFunctional/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8978/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo cat /etc/test/nested/copy/8978/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.17s)

                                                
                                    
TestFunctional/parallel/CertSync (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8978.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo cat /etc/ssl/certs/8978.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8978.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo cat /usr/share/ca-certificates/8978.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/89782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo cat /etc/ssl/certs/89782.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/89782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo cat /usr/share/ca-certificates/89782.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.09s)
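
The hashed names checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash convention used for /etc/ssl/certs entries, so they can be derived from the PEM files themselves. A sketch, assuming openssl is available inside the guest:

    out/minikube-linux-amd64 -p functional-991175 ssh \
      "openssl x509 -noout -subject_hash -in /usr/share/ca-certificates/8978.pem"
    # Should print 51391683, the basename of the /etc/ssl/certs/51391683.0 entry above.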

                                                
                                    
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-991175 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
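
The go-template above lists label keys only; an equivalent jsonpath query (an alternative, not what the test runs) prints the keys together with their values:

    kubectl --context functional-991175 get nodes -o jsonpath='{.items[0].metadata.labels}'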

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 ssh "sudo systemctl is-active docker": exit status 1 (177.928547ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 ssh "sudo systemctl is-active crio": exit status 1 (175.088767ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)
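
The non-zero exits above are the expected outcome: `systemctl is-active` prints the unit state and exits 3 when the unit is inactive, which `minikube ssh` surfaces as a failure even though "inactive" is exactly what the test wants for docker and crio on a containerd cluster. The same checks, tolerating the exit code:

    out/minikube-linux-amd64 -p functional-991175 ssh "sudo systemctl is-active docker" || true
    out/minikube-linux-amd64 -p functional-991175 ssh "sudo systemctl is-active crio" || true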

                                                
                                    
TestFunctional/parallel/License (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.70s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-991175 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.3
registry.k8s.io/kube-proxy:v1.34.3
registry.k8s.io/kube-controller-manager:v1.34.3
registry.k8s.io/kube-apiserver:v1.34.3
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
public.ecr.aws/docker/library/mysql:8.4
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-991175
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-991175
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-991175 image ls --format short --alsologtostderr:
I1219 02:37:12.272417   15530 out.go:360] Setting OutFile to fd 1 ...
I1219 02:37:12.272513   15530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:12.272517   15530 out.go:374] Setting ErrFile to fd 2...
I1219 02:37:12.272521   15530 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:12.272725   15530 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:37:12.273239   15530 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:12.273331   15530 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:12.275260   15530 ssh_runner.go:195] Run: systemctl --version
I1219 02:37:12.277313   15530 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:12.277696   15530 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:12.277721   15530 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:12.277878   15530 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:12.370510   15530 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.19s)

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-991175 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                        │ 3.6.5-0            │ sha256:a3e246 │ 22.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.3            │ sha256:aec12d │ 17.4MB │
│ docker.io/kicbase/echo-server               │ functional-991175  │ sha256:9056ab │ 2.37MB │
│ docker.io/kicbase/echo-server               │ latest             │ sha256:9056ab │ 2.37MB │
│ public.ecr.aws/nginx/nginx                  │ alpine             │ sha256:04da2b │ 23MB   │
│ registry.k8s.io/kube-apiserver              │ v1.34.3            │ sha256:aa2709 │ 27.1MB │
│ registry.k8s.io/kube-proxy                  │ v1.34.3            │ sha256:36eef8 │ 26MB   │
│ registry.k8s.io/pause                       │ 3.3                │ sha256:0184c1 │ 298kB  │
│ registry.k8s.io/pause                       │ latest             │ sha256:350b16 │ 72.3kB │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ public.ecr.aws/docker/library/mysql         │ 8.4                │ sha256:20d0be │ 233MB  │
│ registry.k8s.io/pause                       │ 3.1                │ sha256:da86e6 │ 315kB  │
│ docker.io/kindest/kindnetd                  │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ gcr.io/k8s-minikube/busybox                 │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ registry.k8s.io/kube-controller-manager     │ v1.34.3            │ sha256:5826b2 │ 22.8MB │
│ registry.k8s.io/pause                       │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/library/minikube-local-cache-test │ functional-991175  │ sha256:d580b3 │ 991B   │
│ registry.k8s.io/coredns/coredns             │ v1.12.1            │ sha256:52546a │ 22.4MB │
└─────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-991175 image ls --format table --alsologtostderr:
I1219 02:37:17.914904   15793 out.go:360] Setting OutFile to fd 1 ...
I1219 02:37:17.915202   15793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:17.915214   15793 out.go:374] Setting ErrFile to fd 2...
I1219 02:37:17.915222   15793 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:17.915420   15793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:37:17.915998   15793 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:17.916138   15793 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:17.918873   15793 ssh_runner.go:195] Run: systemctl --version
I1219 02:37:17.921904   15793 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:17.922444   15793 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:17.922485   15793 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:17.922698   15793 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:18.027790   15793 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
TestFunctional/parallel/Version/components (0.77s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.77s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-991175 image ls --format json --alsologtostderr:
[{"id":"sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"22384805"},{"id":"sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691","repoDigests":["registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.3"],"size":"25964312"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899a
e1b342d328d30","repoDigests":["docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6"],"repoTags":["docker.io/kicbase/echo-server:functional-991175","docker.io/kicbase/echo-server:latest"],"size":"2372971"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":["registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534"],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"22871747"},{"id":"sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78","repoDigests":["registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2"],"repoTags":["registry.k8s.io/kube-schedule
r:v1.34.3"],"size":"17382979"},{"id":"sha256:d580b3ba11cebc73715205c06458fe8a597a52fffc55ff54ee152c6357e600e0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-991175"],"size":"991"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22996569"},{"id":"sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c","repoDigests":["registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.3"]
,"size":"27064672"},{"id":"sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.3"],"size":"22819474"},{"id":"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"},{"id":"sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438","repoDigests":["public.ecr.aws/docker/libra
ry/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233"],"repoTags":["public.ecr.aws/docker/library/mysql:8.4"],"size":"233030909"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-991175 image ls --format json --alsologtostderr:
I1219 02:37:17.699196   15782 out.go:360] Setting OutFile to fd 1 ...
I1219 02:37:17.699460   15782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:17.699474   15782 out.go:374] Setting ErrFile to fd 2...
I1219 02:37:17.699480   15782 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:17.699812   15782 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:37:17.700513   15782 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:17.700679   15782 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:17.703174   15782 ssh_runner.go:195] Run: systemctl --version
I1219 02:37:17.705817   15782 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:17.706220   15782 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:17.706255   15782 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:17.706396   15782 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:17.803471   15782 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)
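
The JSON form is the easiest to post-process; a small sketch that extracts each tag with its size (jq is an assumption here, not something the test uses):

    out/minikube-linux-amd64 -p functional-991175 image ls --format json \
      | jq -r '.[] | .repoTags[] + "\t" + .size'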

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-991175 image ls --format yaml --alsologtostderr:
- id: sha256:d580b3ba11cebc73715205c06458fe8a597a52fffc55ff54ee152c6357e600e0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-991175
size: "991"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests:
- registry.k8s.io/etcd@sha256:042ef9c02799eb9303abf1aa99b09f09d94b8ee3ba0c2dd3f42dc4e1d3dce534
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "22871747"
- id: sha256:5826b25d990d7d314d236c8d128f43e443583891f5cdffa7bf8bca50ae9e0942
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.3
size: "22819474"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:20d0be4ee45242864913b12e7dc544f29f94117c9846c6a6b73d416670d42438
repoDigests:
- public.ecr.aws/docker/library/mysql@sha256:5cdee9be17b6b7c804980be29d1bb0ba1536c7afaaed679fe0c1578ea0e3c233
repoTags:
- public.ecr.aws/docker/library/mysql:8.4
size: "233030909"
- id: sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22996569"
- id: sha256:52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "22384805"
- id: sha256:aec12dadf56dd45659a682b94571f115a1be02ee4a262b3b5176394f5c030c78
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.3
size: "17382979"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:aa27095f5619377172f3d59289ccb2ba567ebea93a736d1705be068b2c030b0c
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.3
size: "27064672"
- id: sha256:36eef8e07bdd6abdc2bbf44041e49480fe499a3cedb0ae054b50daa1a35cf691
repoDigests:
- registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6
repoTags:
- registry.k8s.io/kube-proxy:v1.34.3
size: "25964312"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6
repoTags:
- docker.io/kicbase/echo-server:functional-991175
- docker.io/kicbase/echo-server:latest
size: "2372971"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-991175 image ls --format yaml --alsologtostderr:
I1219 02:37:12.470126   15541 out.go:360] Setting OutFile to fd 1 ...
I1219 02:37:12.470453   15541 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:12.470468   15541 out.go:374] Setting ErrFile to fd 2...
I1219 02:37:12.470475   15541 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:12.470776   15541 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:37:12.471663   15541 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:12.471836   15541 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:12.474543   15541 ssh_runner.go:195] Run: systemctl --version
I1219 02:37:12.477078   15541 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:12.477564   15541 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:12.477622   15541 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:12.477810   15541 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:12.569059   15541 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (6.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 ssh pgrep buildkitd: exit status 1 (156.652143ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image build -t localhost/my-image:functional-991175 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 image build -t localhost/my-image:functional-991175 testdata/build --alsologtostderr: (5.670991564s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-991175 image build -t localhost/my-image:functional-991175 testdata/build --alsologtostderr:
I1219 02:37:12.835465   15574 out.go:360] Setting OutFile to fd 1 ...
I1219 02:37:12.835636   15574 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:12.835648   15574 out.go:374] Setting ErrFile to fd 2...
I1219 02:37:12.835652   15574 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:37:12.835874   15574 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:37:12.836402   15574 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:12.837071   15574 config.go:182] Loaded profile config "functional-991175": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
I1219 02:37:12.839071   15574 ssh_runner.go:195] Run: systemctl --version
I1219 02:37:12.841130   15574 main.go:144] libmachine: domain functional-991175 has defined MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:12.841548   15574 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:23:e5:1e", ip: ""} in network mk-functional-991175: {Iface:virbr1 ExpiryTime:2025-12-19 03:34:15 +0000 UTC Type:0 Mac:52:54:00:23:e5:1e Iaid: IPaddr:192.168.39.176 Prefix:24 Hostname:functional-991175 Clientid:01:52:54:00:23:e5:1e}
I1219 02:37:12.841569   15574 main.go:144] libmachine: domain functional-991175 has defined IP address 192.168.39.176 and MAC address 52:54:00:23:e5:1e in network mk-functional-991175
I1219 02:37:12.841706   15574 sshutil.go:53] new ssh client: &{IP:192.168.39.176 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-991175/id_rsa Username:docker}
I1219 02:37:12.929310   15574 build_images.go:162] Building image from path: /tmp/build.1861444303.tar
I1219 02:37:12.929385   15574 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 02:37:12.962106   15574 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1861444303.tar
I1219 02:37:12.977377   15574 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1861444303.tar: stat -c "%s %y" /var/lib/minikube/build/build.1861444303.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1861444303.tar': No such file or directory
I1219 02:37:12.977418   15574 ssh_runner.go:362] scp /tmp/build.1861444303.tar --> /var/lib/minikube/build/build.1861444303.tar (3072 bytes)
I1219 02:37:13.031730   15574 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1861444303
I1219 02:37:13.061368   15574 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1861444303 -xf /var/lib/minikube/build/build.1861444303.tar
I1219 02:37:13.083117   15574 containerd.go:394] Building image: /var/lib/minikube/build/build.1861444303
I1219 02:37:13.083192   15574 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1861444303 --local dockerfile=/var/lib/minikube/build/build.1861444303 --output type=image,name=localhost/my-image:functional-991175
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 3.1s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.1s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 1.3s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.2s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.1s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 exporting manifest sha256:c1c744902496898dfc44656586e2aae712ddde14276f58aa66309fbb36c556a7
#8 exporting manifest sha256:c1c744902496898dfc44656586e2aae712ddde14276f58aa66309fbb36c556a7 0.0s done
#8 exporting config sha256:bf713e5ea800d26426f509d8993fd2aac9de0459b21e3679f5ea18b5fc0f51ca 0.0s done
#8 naming to localhost/my-image:functional-991175 0.0s done
#8 DONE 0.2s
I1219 02:37:18.412247   15574 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1861444303 --local dockerfile=/var/lib/minikube/build/build.1861444303 --output type=image,name=localhost/my-image:functional-991175: (5.329029572s)
I1219 02:37:18.412310   15574 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1861444303
I1219 02:37:18.433249   15574 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1861444303.tar
I1219 02:37:18.446691   15574 build_images.go:218] Built localhost/my-image:functional-991175 from /tmp/build.1861444303.tar
I1219 02:37:18.446735   15574 build_images.go:134] succeeded building to: functional-991175
I1219 02:37:18.446740   15574 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (6.01s)
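The build above goes through minikube's containerd/BuildKit path: the tarred build context is copied into the VM and built with buildctl. A minimal way to reproduce it by hand, assuming the same profile and the testdata/build context used by this test:

  # Build an image inside the minikube VM and tag it under localhost/
  out/minikube-linux-amd64 -p functional-991175 image build -t localhost/my-image:functional-991175 testdata/build --alsologtostderr
  # Confirm the container runtime can see the new image
  out/minikube-linux-amd64 -p functional-991175 image ls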

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.43s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.411896718s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-991175
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.43s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.06s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.07s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.06s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.31s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "236.541753ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "67.256286ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "252.137744ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "57.898419ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)
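The three profile subtests differ only in output format; the --light variants appear to be faster because they skip validating each cluster's status, which is consistent with the timings above. A minimal sketch, assuming the same binary path:

  out/minikube-linux-amd64 profile list               # table output
  out/minikube-linux-amd64 profile list -l            # light listing
  out/minikube-linux-amd64 profile list -o json       # machine-readable
  out/minikube-linux-amd64 profile list -o json --light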

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image load --daemon kicbase/echo-server:functional-991175 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 image load --daemon kicbase/echo-server:functional-991175 --alsologtostderr: (1.13802391s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.31s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image load --daemon kicbase/echo-server:functional-991175 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.221456731s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-991175
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image load --daemon kicbase/echo-server:functional-991175 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-linux-amd64 -p functional-991175 image load --daemon kicbase/echo-server:functional-991175 --alsologtostderr: (1.241467347s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (2.67s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image save kicbase/echo-server:functional-991175 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image rm kicbase/echo-server:functional-991175 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-991175
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 image save --daemon kicbase/echo-server:functional-991175 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-991175
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.40s)
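Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full round trip of an image between the cluster and the host. A minimal sketch, assuming the same profile; /tmp/echo-server-save.tar stands in for the Jenkins workspace path used above:

  # Export the image from the cluster to a tarball on the host
  out/minikube-linux-amd64 -p functional-991175 image save kicbase/echo-server:functional-991175 /tmp/echo-server-save.tar
  # Remove it from the cluster, then restore it from the tarball
  out/minikube-linux-amd64 -p functional-991175 image rm kicbase/echo-server:functional-991175
  out/minikube-linux-amd64 -p functional-991175 image load /tmp/echo-server-save.tar
  # Or export it straight into the host's docker daemon and verify
  out/minikube-linux-amd64 -p functional-991175 image save --daemon kicbase/echo-server:functional-991175
  docker image inspect kicbase/echo-server:functional-991175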

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (18.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-991175 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-991175 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-f6w8n" [2dd3a376-a910-4065-a877-d5dd5989104c] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-f6w8n" [2dd3a376-a910-4065-a877-d5dd5989104c] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 18.008012281s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (18.24s)
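The ServiceCmd subtests that follow all rely on the deployment created here. The equivalent manual steps, assuming the same context; the final watch command is an illustrative stand-in for the test's readiness wait:

  kubectl --context functional-991175 create deployment hello-node --image kicbase/echo-server
  kubectl --context functional-991175 expose deployment hello-node --type=NodePort --port=8080
  kubectl --context functional-991175 get pods -l app=hello-node --watch   # wait for Running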

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (10.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdany-port9742010/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766111824720710857" to /tmp/TestFunctionalparallelMountCmdany-port9742010/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766111824720710857" to /tmp/TestFunctionalparallelMountCmdany-port9742010/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766111824720710857" to /tmp/TestFunctionalparallelMountCmdany-port9742010/001/test-1766111824720710857
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (165.875325ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:37:04.886919    8978 retry.go:31] will retry after 438.580475ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 02:37 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 02:37 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 02:37 test-1766111824720710857
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh cat /mount-9p/test-1766111824720710857
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-991175 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [2342ec87-98cc-47ed-9a07-a529f4e36993] Pending
helpers_test.go:353: "busybox-mount" [2342ec87-98cc-47ed-9a07-a529f4e36993] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [2342ec87-98cc-47ed-9a07-a529f4e36993] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [2342ec87-98cc-47ed-9a07-a529f4e36993] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 8.005231442s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-991175 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdany-port9742010/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (10.14s)
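The mount test publishes a host directory into the guest over 9p and verifies it from both sides. A minimal sketch, assuming the same profile; /tmp/hostdir is an illustrative host path, and the mount command is backgrounded here because it runs in the foreground:

  # Host side: expose the directory to the guest
  out/minikube-linux-amd64 mount -p functional-991175 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
  # Guest side: confirm the 9p mount and inspect its contents
  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T /mount-9p | grep 9p"
  out/minikube-linux-amd64 -p functional-991175 ssh -- ls -la /mount-9p
  # Tear down
  out/minikube-linux-amd64 -p functional-991175 ssh "sudo umount -f /mount-9p"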

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 service list -o json
functional_test.go:1504: Took "456.640875ms" to run "out/minikube-linux-amd64 -p functional-991175 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.46s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.176:31884
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.29s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.28s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.176:31884
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.27s)
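The List/HTTPS/Format/URL subtests all resolve the NodePort endpoint of the hello-node service in different output formats. The same lookups by hand, assuming the same profile:

  out/minikube-linux-amd64 -p functional-991175 service list
  out/minikube-linux-amd64 -p functional-991175 service hello-node --url
  out/minikube-linux-amd64 -p functional-991175 service --namespace=default --https --url hello-node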

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdspecific-port1680034923/001:/mount-9p --alsologtostderr -v=1 --port 38769]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (325.342672ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:37:15.185442    8978 retry.go:31] will retry after 725.930409ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdspecific-port1680034923/001:/mount-9p --alsologtostderr -v=1 --port 38769] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 ssh "sudo umount -f /mount-9p": exit status 1 (189.36024ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-991175 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdspecific-port1680034923/001:/mount-9p --alsologtostderr -v=1 --port 38769] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.82s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (0.97s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T" /mount1: exit status 1 (176.280949ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:37:16.854402    8978 retry.go:31] will retry after 270.908441ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-991175 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-991175 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-991175 /tmp/TestFunctionalparallelMountCmdVerifyCleanup644847876/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (0.97s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-991175
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-991175
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-991175
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/22230-5003/.minikube/files/etc/test/nested/copy/8978/hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (77.97s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-509202 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-509202 --memory=4096 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (1m17.971381177s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/StartWithProxy (77.97s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/AuditLog (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (47.84s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart
I1219 02:38:49.192653    8978 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-509202 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-509202 --alsologtostderr -v=8: (47.834955829s)
functional_test.go:678: soft start took 47.835328828s for "functional-509202" cluster.
I1219 02:39:37.027912    8978 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/SoftStart (47.84s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubeContext (0.04s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-509202 get po -A
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/KubectlGetPods (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.75s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_remote (2.75s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.39s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialCacheC547734680/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cache add minikube-local-cache-test:functional-509202
functional_test.go:1104: (dbg) Done: out/minikube-linux-amd64 -p functional-509202 cache add minikube-local-cache-test:functional-509202: (2.105914318s)
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cache delete minikube-local-cache-test:functional-509202
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-509202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/add_local (2.39s)
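add_local builds a throwaway image on the host and pushes it into minikube's image cache. A minimal sketch, assuming the same profile; <context-dir> stands for any directory containing a Dockerfile (the test uses a generated temp directory):

  docker build -t minikube-local-cache-test:functional-509202 <context-dir>
  out/minikube-linux-amd64 -p functional-509202 cache add minikube-local-cache-test:functional-509202
  # Clean up the cache entry and the local tag afterwards
  out/minikube-linux-amd64 -p functional-509202 cache delete minikube-local-cache-test:functional-509202
  docker rmi minikube-local-cache-test:functional-509202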

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (174.741294ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/cache_reload (1.35s)
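cache_reload shows that an image deleted from the node can be restored from minikube's local cache rather than re-pulled. The equivalent manual sequence, assuming the same profile:

  out/minikube-linux-amd64 -p functional-509202 ssh sudo crictl rmi registry.k8s.io/pause:latest
  out/minikube-linux-amd64 -p functional-509202 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
  out/minikube-linux-amd64 -p functional-509202 cache reload
  out/minikube-linux-amd64 -p functional-509202 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again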

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 kubectl -- --context functional-509202 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-509202 get pods
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (41.88s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-509202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1219 02:39:44.847611    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:40:12.533452    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-509202 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (41.881755822s)
functional_test.go:776: restart took 41.881889328s for "functional-509202" cluster.
I1219 02:40:26.192271    8978 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ExtraConfig (41.88s)
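The restart here passes a component flag through to the apiserver via --extra-config. The invocation used by the test, assuming the same profile:

  out/minikube-linux-amd64 start -p functional-509202 \
    --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
    --wait=all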

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-509202 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-509202 logs: (1.223484178s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsCmd (1.22s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.22s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi182188430/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-509202 logs --file /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1serialLogsFi182188430/001/logs.txt: (1.221855089s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/LogsFileCmd (1.22s)
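LogsCmd and LogsFileCmd differ only in destination; --file writes the same output to a path instead of stdout. A minimal sketch, assuming the same profile; /tmp/logs.txt is an illustrative path:

  out/minikube-linux-amd64 -p functional-509202 logs
  out/minikube-linux-amd64 -p functional-509202 logs --file /tmp/logs.txt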

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.28s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-509202 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-509202
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-509202: exit status 115 (226.607587ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬─────────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │             URL             │
	├───────────┼─────────────┼─────────────┼─────────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.39.198:32397 │
	└───────────┴─────────────┴─────────────┴─────────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-509202 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/serial/InvalidService (4.28s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 config get cpus: exit status 14 (72.678657ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 config get cpus: exit status 14 (59.210318ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ConfigCmd (0.42s)
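ConfigCmd cycles a key through set/get/unset; a get on an unset key exits with status 14, which is what the two non-zero exits above show. The same sequence by hand, assuming the same profile:

  out/minikube-linux-amd64 -p functional-509202 config set cpus 2
  out/minikube-linux-amd64 -p functional-509202 config get cpus      # prints 2
  out/minikube-linux-amd64 -p functional-509202 config unset cpus
  out/minikube-linux-amd64 -p functional-509202 config get cpus      # exit status 14: key not found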

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.23s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-509202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-509202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (106.981576ms)

                                                
                                                
-- stdout --
	* [functional-509202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:40:33.576786   17447 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:40:33.576887   17447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:40:33.576898   17447 out.go:374] Setting ErrFile to fd 2...
	I1219 02:40:33.576904   17447 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:40:33.577108   17447 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:40:33.577529   17447 out.go:368] Setting JSON to false
	I1219 02:40:33.578446   17447 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1373,"bootTime":1766110661,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:40:33.578504   17447 start.go:143] virtualization: kvm guest
	I1219 02:40:33.580383   17447 out.go:179] * [functional-509202] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 02:40:33.581473   17447 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:40:33.581464   17447 notify.go:221] Checking for updates...
	I1219 02:40:33.583708   17447 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:40:33.585768   17447 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:40:33.587373   17447 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:40:33.588591   17447 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:40:33.589687   17447 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:40:33.591350   17447 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 02:40:33.592003   17447 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:40:33.622826   17447 out.go:179] * Using the kvm2 driver based on existing profile
	I1219 02:40:33.623763   17447 start.go:309] selected driver: kvm2
	I1219 02:40:33.623774   17447 start.go:928] validating driver "kvm2" against &{Name:functional-509202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-rc.1 ClusterName:functional-509202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:40:33.623879   17447 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:40:33.625604   17447 out.go:203] 
	W1219 02:40:33.626646   17447 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1219 02:40:33.627555   17447 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-509202 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DryRun (0.23s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-509202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-509202 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: exit status 23 (123.337673ms)

                                                
                                                
-- stdout --
	* [functional-509202] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:40:33.461733   17413 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:40:33.461876   17413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:40:33.461883   17413 out.go:374] Setting ErrFile to fd 2...
	I1219 02:40:33.461890   17413 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:40:33.462361   17413 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:40:33.462904   17413 out.go:368] Setting JSON to false
	I1219 02:40:33.464141   17413 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":1372,"bootTime":1766110661,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 02:40:33.464238   17413 start.go:143] virtualization: kvm guest
	I1219 02:40:33.468138   17413 out.go:179] * [functional-509202] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1219 02:40:33.469802   17413 notify.go:221] Checking for updates...
	I1219 02:40:33.469884   17413 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 02:40:33.471082   17413 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 02:40:33.472370   17413 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 02:40:33.473814   17413 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 02:40:33.474952   17413 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 02:40:33.476203   17413 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 02:40:33.477918   17413 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
	I1219 02:40:33.478646   17413 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 02:40:33.512865   17413 out.go:179] * Utilisation du pilote kvm2 basé sur le profil existant
	I1219 02:40:33.513916   17413 start.go:309] selected driver: kvm2
	I1219 02:40:33.513932   17413 start.go:928] validating driver "kvm2" against &{Name:functional-509202 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/22186/minikube-v1.37.0-1765965980-22186-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765966054-22186@sha256:1c173489767e6632c410d2554f1a2272f032a423dd528157e201daadfe3c43f0 Memory:4096 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.35.0-rc.1 ClusterName:functional-509202 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.198 Port:8441 KubernetesVersion:v1.35.0-rc.1 ContainerRuntime:containerd ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertEx
piration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1219 02:40:33.514068   17413 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 02:40:33.516194   17413 out.go:203] 
	W1219 02:40:33.517304   17413 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1219 02:40:33.518373   17413 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/InternationalLanguage (0.12s)
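As far as I can tell, the localized output above is driven by the caller's locale environment rather than a minikube flag, so a sketch along these lines (profile and memory value copied from the logged command; the locale variable is an assumption) should reproduce the French RSRC_INSUFFICIENT_REQ_MEMORY message:

    # Hedged sketch: assumes minikube selects its message catalogue from LC_ALL/LANG
    # and that a French locale is installed on the host.
    LC_ALL=fr_FR.UTF-8 out/minikube-linux-amd64 start -p functional-509202 \
      --dry-run --memory 250MB --alsologtostderr --driver=kvm2 \
      --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
    # Expected: exit status 23 and "X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY ..."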

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.79s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 status -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/StatusCmd (0.79s)
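For readers unfamiliar with the -f flag used above: minikube status accepts a Go template over its status fields, and the field names visible in the logged command are the real ones. A minimal sketch reusing this run's profile:

    # Custom template output over selected status fields:
    out/minikube-linux-amd64 -p functional-509202 status \
      -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}'
    # Machine-readable variant:
    out/minikube-linux-amd64 -p functional-509202 status -o json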

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (15.69s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-509202 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-509202 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-9f67c86d4-7j4b8" [4ebabee6-861b-4953-8ee9-d92d121c6246] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-9f67c86d4-7j4b8" [4ebabee6-861b-4953-8ee9-d92d121c6246] Running
functional_test.go:1645: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 15.244726101s
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 service hello-node-connect --url
functional_test.go:1660: found endpoint for hello-node-connect: http://192.168.39.198:30662
functional_test.go:1680: http://192.168.39.198:30662: success! body:
Request served by hello-node-connect-9f67c86d4-7j4b8

                                                
                                                
HTTP/1.1 GET /

                                                
                                                
Host: 192.168.39.198:30662
Accept-Encoding: gzip
User-Agent: Go-http-client/1.1
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmdConnect (15.69s)
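Condensing the passing sequence above into a standalone sketch (names and image are taken from the log; the explicit wait is illustrative, the harness polls pod labels instead):

    kubectl --context functional-509202 create deployment hello-node-connect --image kicbase/echo-server
    kubectl --context functional-509202 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-509202 wait deployment/hello-node-connect --for=condition=Available --timeout=10m
    URL=$(out/minikube-linux-amd64 -p functional-509202 service hello-node-connect --url)
    curl -s "$URL"    # echo-server replies with the request line and headers, as shown above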

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.16s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/AddonsCmd (0.16s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (54.27s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [179f5106-59fa-4a4d-93bc-bbd707ec6f17] Running
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003563386s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-509202 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-509202 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-509202 get pvc myclaim -o=json
I1219 02:40:40.609285    8978 retry.go:31] will retry after 2.104713085s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:4b760375-2a80-4969-a12b-40191d549668 ResourceVersion:866 Generation:0 CreationTimestamp:2025-12-19 02:40:40 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0019bcde0 VolumeMode:0xc0019bcdf0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-509202 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-509202 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:40:42.893049    8978 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [2a3a1137-421e-412f-a243-e03747220300] Pending
helpers_test.go:353: "sp-pod" [2a3a1137-421e-412f-a243-e03747220300] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [2a3a1137-421e-412f-a243-e03747220300] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 37.009430899s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-509202 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-509202 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-509202 delete -f testdata/storage-provisioner/pod.yaml: (2.016942047s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-509202 apply -f testdata/storage-provisioner/pod.yaml
I1219 02:41:22.459084    8978 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [0a3a4012-e874-478b-8e10-5b71e46f9a8d] Pending
helpers_test.go:353: "sp-pod" [0a3a4012-e874-478b-8e10-5b71e46f9a8d] Running
functional_test_pvc_test.go:140: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.007027172s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-509202 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PersistentVolumeClaim (54.27s)
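The report does not include testdata/storage-provisioner/pvc.yaml itself, but the PVC object logged during the retry (ReadWriteOnce, 500Mi, Filesystem volume mode) implies a claim roughly like the following reconstruction:

    # Reconstruction of the applied claim; treat it as a sketch, not the actual test data.
    kubectl --context functional-509202 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: myclaim
    spec:
      accessModes: ["ReadWriteOnce"]
      volumeMode: Filesystem
      resources:
        requests:
          storage: 500Mi
    EOF
    # The storage-provisioner addon binds it shortly afterwards:
    kubectl --context functional-509202 get pvc myclaim -o jsonpath='{.status.phase}'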

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/SSHCmd (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.12s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh -n functional-509202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cp functional-509202:/home/docker/cp-test.txt /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelCpCm3653567642/001/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh -n functional-509202 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh -n functional-509202 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CpCmd (1.12s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (48.56s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-509202 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-7d7b65bc95-qhht8" [9b8f48cb-e51c-4347-b0bd-31ec403f0a8c] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-7d7b65bc95-qhht8" [9b8f48cb-e51c-4347-b0bd-31ec403f0a8c] Running
functional_test.go:1804: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL: app=mysql healthy within 39.005417665s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;": exit status 1 (144.610391ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:41:38.459048    8978 retry.go:31] will retry after 1.2463544s: exit status 1
E1219 02:41:39.042719    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:39.048034    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:39.058327    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:39.078708    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:39.119057    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:39.199469    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:39.359955    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:39.680589    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;": exit status 1 (231.560768ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:41:39.937558    8978 retry.go:31] will retry after 2.206442875s: exit status 1
E1219 02:41:40.320996    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:41.601542    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;": exit status 1 (204.552147ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:41:42.348888    8978 retry.go:31] will retry after 1.158768034s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;": exit status 1 (123.84111ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1219 02:41:43.632229    8978 retry.go:31] will retry after 3.930127768s: exit status 1
E1219 02:41:44.161721    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MySQL (48.56s)
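The four non-zero exits above are the expected warm-up phase: mysqld is still initializing (ERROR 2002) or has not applied the root password yet (ERROR 1045), and the harness simply retries. The same polling can be sketched in shell (pod name and password copied from the log):

    # Retry the probe query until the server accepts it.
    until kubectl --context functional-509202 exec mysql-7d7b65bc95-qhht8 -- \
          mysql -ppassword -e "show databases;"; do
      sleep 2
    done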

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.17s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/8978/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo cat /etc/test/nested/copy/8978/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/FileSync (0.17s)
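The /etc/test/nested/copy/8978/hosts path checked above is, if I understand minikube's file-sync feature correctly, seeded from the host before the cluster starts: files placed under ~/.minikube/files/<path> are copied to /<path> in the guest. A hedged sketch of that mechanism:

    # Assumption: default MINIKUBE_HOME layout; the file is synced on the next start.
    mkdir -p ~/.minikube/files/etc/test/nested/copy/8978
    echo "Test file for checking file sync process" > ~/.minikube/files/etc/test/nested/copy/8978/hosts
    # After (re)starting the profile, verify inside the VM:
    out/minikube-linux-amd64 -p functional-509202 ssh "sudo cat /etc/test/nested/copy/8978/hosts"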

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/8978.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo cat /etc/ssl/certs/8978.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/8978.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo cat /usr/share/ca-certificates/8978.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/89782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo cat /etc/ssl/certs/89782.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/89782.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo cat /usr/share/ca-certificates/89782.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/CertSync (1.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-509202 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NodeLabels (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.36s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 ssh "sudo systemctl is-active docker": exit status 1 (176.025775ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 ssh "sudo systemctl is-active crio": exit status 1 (179.507839ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/NonActiveRuntimeDisabled (0.36s)
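The exit status 1 results here are expected rather than failures: systemctl is-active exits non-zero for any state other than active, and the test only cares that docker and crio report inactive while containerd is the selected runtime. A sketch of the same probe:

    for rt in docker crio containerd; do
      state=$(out/minikube-linux-amd64 -p functional-509202 ssh "sudo systemctl is-active $rt" || true)
      echo "$rt: $state"    # expect docker/crio -> inactive, containerd -> active
    done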

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.62s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/License (0.62s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/short (0.06s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.4s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_not_create (0.40s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 version -o=json --components
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/Version/components (0.47s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (9.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port
functional_test_mount_test.go:74: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1028991096/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:108: wrote "test-1766112032985851847" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1028991096/001/created-by-test
functional_test_mount_test.go:108: wrote "test-1766112032985851847" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1028991096/001/created-by-test-removed-by-pod
functional_test_mount_test.go:108: wrote "test-1766112032985851847" to /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1028991096/001/test-1766112032985851847
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:116: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (186.834073ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:40:33.173112    8978 retry.go:31] will retry after 648.533286ms: exit status 1
functional_test_mount_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:130: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:134: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 19 02:40 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 19 02:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 19 02:40 test-1766112032985851847
functional_test_mount_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh cat /mount-9p/test-1766112032985851847
functional_test_mount_test.go:149: (dbg) Run:  kubectl --context functional-509202 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:353: "busybox-mount" [41fb2b43-9d78-4f0c-a7d5-8215224907db] Pending
helpers_test.go:353: "busybox-mount" [41fb2b43-9d78-4f0c-a7d5-8215224907db] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:353: "busybox-mount" [41fb2b43-9d78-4f0c-a7d5-8215224907db] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "busybox-mount" [41fb2b43-9d78-4f0c-a7d5-8215224907db] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:154: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.035822693s
functional_test_mount_test.go:170: (dbg) Run:  kubectl --context functional-509202 logs busybox-mount
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:95: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1028991096/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/any-port (9.31s)
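Put together, the 9p mount workflow exercised above looks roughly like this as a standalone script (the host directory is arbitrary; the retry mirrors the test waiting for the mount daemon to come up):

    out/minikube-linux-amd64 mount -p functional-509202 /tmp/hostdir:/mount-9p --alsologtostderr -v=1 &
    MOUNT_PID=$!
    until out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T /mount-9p | grep 9p"; do sleep 1; done
    out/minikube-linux-amd64 -p functional-509202 ssh -- ls -la /mount-9p
    kill "$MOUNT_PID"    # or: out/minikube-linux-amd64 mount -p functional-509202 --kill=true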

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "297.737171ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "74.406202ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_list (0.37s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.34s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "279.452052ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "63.537665ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ProfileCmd/profile_json_output (0.34s)
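The two timings above compare the full listing against the --light variant, which (as I understand the flag) skips probing each cluster's live status and is therefore much faster:

    out/minikube-linux-amd64 profile list -o json           # full listing, queries cluster status
    out/minikube-linux-amd64 profile list -o json --light   # assumption: skips the status probe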

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port
functional_test_mount_test.go:219: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2858679738/001:/mount-9p --alsologtostderr -v=1 --port 36803]
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:249: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (178.641853ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:40:42.471090    8978 retry.go:31] will retry after 503.081245ms: exit status 1
functional_test_mount_test.go:249: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:263: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh -- ls -la /mount-9p
functional_test_mount_test.go:267: guest mount directory contents
total 0
functional_test_mount_test.go:269: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2858679738/001:/mount-9p --alsologtostderr -v=1 --port 36803] ...
functional_test_mount_test.go:270: reading mount text
functional_test_mount_test.go:284: done reading mount text
functional_test_mount_test.go:236: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:236: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 ssh "sudo umount -f /mount-9p": exit status 1 (154.329538ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:238: "out/minikube-linux-amd64 -p functional-509202 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:240: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun2858679738/001:/mount-9p --alsologtostderr -v=1 --port 36803] ...
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/specific-port (1.41s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.35s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1641780952/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1641780952/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:304: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1641780952/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T" /mount1: exit status 1 (170.46406ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1219 02:40:43.876664    8978 retry.go:31] will retry after 662.747254ms: exit status 1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T" /mount1
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T" /mount2
functional_test_mount_test.go:331: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh "findmnt -T" /mount3
functional_test_mount_test.go:376: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-509202 --kill=true
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1641780952/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1641780952/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:319: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-509202 /tmp/TestFunctionalNewestKubernetesVersionv1.35.0-rc.1parallelMoun1641780952/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/MountCmd/VerifyCleanup (1.35s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (38.14s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-509202 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-509202 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-5758569b79-6qsvn" [a22ce170-646c-4cc4-8519-e8e61424112b] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-5758569b79-6qsvn" [a22ce170-646c-4cc4-8519-e8e61424112b] Running
functional_test.go:1460: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 38.003960788s
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/DeployApp (38.14s)

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.21s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-509202 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-rc.1
registry.k8s.io/kube-proxy:v1.35.0-rc.1
registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
registry.k8s.io/kube-apiserver:v1.35.0-rc.1
registry.k8s.io/etcd:3.6.6-0
registry.k8s.io/coredns/coredns:v1.13.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/minikube-local-cache-test:functional-509202
docker.io/library/kong:3.9
docker.io/kubernetesui/dashboard-web:1.7.0
docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2
docker.io/kubernetesui/dashboard-auth:1.4.0
docker.io/kubernetesui/dashboard-api:1.14.0
docker.io/kindest/kindnetd:v20250512-df8de77b
docker.io/kicbase/echo-server:functional-509202
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-509202 image ls --format short --alsologtostderr:
I1219 02:41:13.495708   18550 out.go:360] Setting OutFile to fd 1 ...
I1219 02:41:13.495817   18550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:13.495826   18550 out.go:374] Setting ErrFile to fd 2...
I1219 02:41:13.495830   18550 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:13.496034   18550 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:41:13.496605   18550 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:13.496708   18550 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:13.499152   18550 ssh_runner.go:195] Run: systemctl --version
I1219 02:41:13.501850   18550 main.go:144] libmachine: domain functional-509202 has defined MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:13.502340   18550 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:a8:92", ip: ""} in network mk-functional-509202: {Iface:virbr1 ExpiryTime:2025-12-19 03:37:46 +0000 UTC Type:0 Mac:52:54:00:28:a8:92 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-509202 Clientid:01:52:54:00:28:a8:92}
I1219 02:41:13.502368   18550 main.go:144] libmachine: domain functional-509202 has defined IP address 192.168.39.198 and MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:13.502557   18550 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-509202/id_rsa Username:docker}
I1219 02:41:13.594982   18550 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListShort (0.21s)
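The short and table listings shown in this and the next section come from the same image ls command; a minimal sketch, including a grep for the locally built test image visible in the output:

    out/minikube-linux-amd64 -p functional-509202 image ls --format short
    out/minikube-linux-amd64 -p functional-509202 image ls --format table
    out/minikube-linux-amd64 -p functional-509202 image ls --format short | grep minikube-local-cache-test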

                                                
                                    
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.18s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-509202 image ls --format table --alsologtostderr:
┌──────────────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                      IMAGE                       │        TAG         │   IMAGE ID    │  SIZE  │
├──────────────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/busybox                      │ 1.28.4-glibc       │ sha256:56cc51 │ 2.4MB  │
│ gcr.io/k8s-minikube/storage-provisioner          │ v5                 │ sha256:6e38f4 │ 9.06MB │
│ registry.k8s.io/etcd                             │ 3.6.6-0            │ sha256:0a108f │ 23.6MB │
│ registry.k8s.io/pause                            │ 3.1                │ sha256:da86e6 │ 315kB  │
│ registry.k8s.io/pause                            │ 3.3                │ sha256:0184c1 │ 298kB  │
│ docker.io/kubernetesui/dashboard-api             │ 1.14.0             │ sha256:a0607a │ 16.5MB │
│ docker.io/kubernetesui/dashboard-auth            │ 1.4.0              │ sha256:dd5437 │ 14.5MB │
│ docker.io/library/kong                           │ 3.9                │ sha256:3a9759 │ 120MB  │
│ docker.io/library/minikube-local-cache-test      │ functional-509202  │ sha256:d580b3 │ 991B   │
│ public.ecr.aws/nginx/nginx                       │ alpine             │ sha256:04da2b │ 23MB   │
│ registry.k8s.io/kube-controller-manager          │ v1.35.0-rc.1       │ sha256:5032a5 │ 23.1MB │
│ registry.k8s.io/kube-scheduler                   │ v1.35.0-rc.1       │ sha256:73f80c │ 17.2MB │
│ registry.k8s.io/pause                            │ latest             │ sha256:350b16 │ 72.3kB │
│ docker.io/kicbase/echo-server                    │ functional-509202  │ sha256:9056ab │ 2.37MB │
│ docker.io/kindest/kindnetd                       │ v20250512-df8de77b │ sha256:409467 │ 44.4MB │
│ docker.io/kubernetesui/dashboard-web             │ 1.7.0              │ sha256:59f642 │ 62.5MB │
│ registry.k8s.io/coredns/coredns                  │ v1.13.1            │ sha256:aa5e3e │ 23.6MB │
│ registry.k8s.io/kube-apiserver                   │ v1.35.0-rc.1       │ sha256:588654 │ 27.7MB │
│ registry.k8s.io/kube-proxy                       │ v1.35.0-rc.1       │ sha256:af0321 │ 25.8MB │
│ registry.k8s.io/pause                            │ 3.10.1             │ sha256:cd073f │ 320kB  │
│ docker.io/kubernetesui/dashboard-metrics-scraper │ 1.2.2              │ sha256:d9cbc9 │ 13MB   │
└──────────────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-509202 image ls --format table --alsologtostderr:
I1219 02:41:13.882783   18572 out.go:360] Setting OutFile to fd 1 ...
I1219 02:41:13.882986   18572 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:13.882996   18572 out.go:374] Setting ErrFile to fd 2...
I1219 02:41:13.883000   18572 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:13.883178   18572 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:41:13.883698   18572 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:13.883788   18572 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:13.885715   18572 ssh_runner.go:195] Run: systemctl --version
I1219 02:41:13.887921   18572 main.go:144] libmachine: domain functional-509202 has defined MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:13.888370   18572 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:a8:92", ip: ""} in network mk-functional-509202: {Iface:virbr1 ExpiryTime:2025-12-19 03:37:46 +0000 UTC Type:0 Mac:52:54:00:28:a8:92 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-509202 Clientid:01:52:54:00:28:a8:92}
I1219 02:41:13.888393   18572 main.go:144] libmachine: domain functional-509202 has defined IP address 192.168.39.198 and MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:13.888525   18572 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-509202/id_rsa Username:docker}
I1219 02:41:13.974659   18572 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListTable (0.18s)
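
Note: as the stderr above shows, image ls gathers this listing by running crictl inside the guest over SSH. For manual debugging, roughly the same data can be pulled straight from the node (a sketch, assuming the functional-509202 profile is still up):

out/minikube-linux-amd64 -p functional-509202 ssh "sudo crictl images"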

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-509202 image ls --format json --alsologtostderr:
[{"id":"sha256:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce","repoDigests":["registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee"],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-rc.1"],"size":"27686536"},{"id":"sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-509202"],"size":"2372971"},{"id":"sha256:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1","repoDigests":["docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff"],"repoTags":["docker.io/kubernetesui/dashboard-auth:1.4.0"],"size":"14450164"},{"id":"sha256:d580b3ba11cebc73715205c06458fe8a597a52fffc55ff54ee152c6357e600e0","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-509202"],"size":"991"},{"id":"sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","rep
oDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"2395207"},{"id":"sha256:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-rc.1"],"size":"23134242"},{"id":"sha256:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3"],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-rc.1"],"size":"17237597"},{"id":"sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"315399"},{"id":"sha256:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b","repoDigests":["docker.io
/kubernetesui/dashboard-api@sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff"],"repoTags":["docker.io/kubernetesui/dashboard-api:1.14.0"],"size":"16498766"},{"id":"sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06","repoDigests":["docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d"],"repoTags":["docker.io/kubernetesui/dashboard-web:1.7.0"],"size":"62497108"},{"id":"sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"320448"},{"id":"sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"44375501"},{"id":"sha256:3a975970da2f5f3b
909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480","repoDigests":["docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29"],"repoTags":["docker.io/library/kong:3.9"],"size":"120420500"},{"id":"sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5","repoDigests":["public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76"],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"22996569"},{"id":"sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2","repoDigests":["registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890"],"repoTags":["registry.k8s.io/etcd:3.6.6-0"],"size":"23641797"},{"id":"sha256:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a","repoDigests":["registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5"],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-rc.1"],"size":"25789553"},{"id":"sh
a256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"72306"},{"id":"sha256:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167","repoDigests":["docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775"],"repoTags":["docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2"],"size":"12969394"},{"id":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"9058936"},{"id":"sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6"],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"23553139"},{"id":
"sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"297686"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-509202 image ls --format json --alsologtostderr:
I1219 02:41:13.708443   18561 out.go:360] Setting OutFile to fd 1 ...
I1219 02:41:13.708667   18561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:13.708676   18561 out.go:374] Setting ErrFile to fd 2...
I1219 02:41:13.708680   18561 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:13.708835   18561 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:41:13.709368   18561 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:13.709459   18561 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:13.711487   18561 ssh_runner.go:195] Run: systemctl --version
I1219 02:41:13.713658   18561 main.go:144] libmachine: domain functional-509202 has defined MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:13.714068   18561 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:a8:92", ip: ""} in network mk-functional-509202: {Iface:virbr1 ExpiryTime:2025-12-19 03:37:46 +0000 UTC Type:0 Mac:52:54:00:28:a8:92 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-509202 Clientid:01:52:54:00:28:a8:92}
I1219 02:41:13.714092   18561 main.go:144] libmachine: domain functional-509202 has defined IP address 192.168.39.198 and MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:13.714229   18561 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-509202/id_rsa Username:docker}
I1219 02:41:13.796071   18561 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListJson (0.18s)
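
The JSON listing above is an array of objects with id, repoDigests, repoTags and size fields. A quick way to summarize such output on the host (a sketch, assuming jq is installed):

out/minikube-linux-amd64 -p functional-509202 image ls --format json | jq -r '.[].repoTags[]'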

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-509202 image ls --format yaml --alsologtostderr:
- id: sha256:0a108f7189562e99793bdecab61fdf1a7c9d913af3385de9da17fb9d6ff430e2
repoDigests:
- registry.k8s.io/etcd@sha256:60a30b5d81b2217555e2cfb9537f655b7ba97220b99c39ee2e162a7127225890
repoTags:
- registry.k8s.io/etcd:3.6.6-0
size: "23641797"
- id: sha256:350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "72306"
- id: sha256:dd54374d0ab14ab65dd3d5d975f97d5b4aff7b221db479874a4429225b9b22b1
repoDigests:
- docker.io/kubernetesui/dashboard-auth@sha256:53e9917898bf98ff2de91f7f9bdedd3545780eb3ac72158889ae031136e9eeff
repoTags:
- docker.io/kubernetesui/dashboard-auth:1.4.0
size: "14450164"
- id: sha256:5032a56602e1b9bd8856699701b6148aa1b9901d05b61f893df3b57f84aca614
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:57ab0f75f58d99f4be7bff7bdda015fcbf1b7c20e58ba2722c8c39f751dc8c98
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-rc.1
size: "23134242"
- id: sha256:da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "315399"
- id: sha256:0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "297686"
- id: sha256:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-509202
size: "2372971"
- id: sha256:409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "44375501"
- id: sha256:aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9b9128672209474da07c91439bf15ed704ae05ad918dd6454e5b6ae14e35fee6
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "23553139"
- id: sha256:73f80cdc073daa4d501207f9e6dec1fa9eea5f27e8d347b8a0c4bad8811eecdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:8155e3db27c7081abfc8eb5da70820cfeaf0bba7449e45360e8220e670f417d3
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-rc.1
size: "17237597"
- id: sha256:cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
repoTags:
- registry.k8s.io/pause:3.10.1
size: "320448"
- id: sha256:d9cbc9f4053ca11fa6edb814b512ab807a89346934700a3a425fc1e24a3f2167
repoDigests:
- docker.io/kubernetesui/dashboard-metrics-scraper@sha256:5154b68252bd601cf85092b6413cb9db224af1ef89cb53009d2070dfccd30775
repoTags:
- docker.io/kubernetesui/dashboard-metrics-scraper:1.2.2
size: "12969394"
- id: sha256:3a975970da2f5f3b909dec92b1a5ddc5e9299baee1442fb1a6986a8a120d5480
repoDigests:
- docker.io/library/kong@sha256:4379444ecfd82794b27de38a74ba540e8571683dfdfce74c8ecb4018f308fb29
repoTags:
- docker.io/library/kong:3.9
size: "120420500"
- id: sha256:d580b3ba11cebc73715205c06458fe8a597a52fffc55ff54ee152c6357e600e0
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-509202
size: "991"
- id: sha256:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "2395207"
- id: sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "9058936"
- id: sha256:04da2b0513cd78d8d29d60575cef80813c5496c15a801921e47efdf0feba39e5
repoDigests:
- public.ecr.aws/nginx/nginx@sha256:a411c634df4374901a4a9370626801998f159652f627b1cdfbbbe012adcd6c76
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "22996569"
- id: sha256:58865405a13bccac1d74bc3f446dddd22e6ef0d7ee8b52363c86dd31838976ce
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:58367b5c0428495c0c12411fa7a018f5d40fe57307b85d8935b1ed35706ff7ee
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-rc.1
size: "27686536"
- id: sha256:af0321f3a4f388cfb978464739c323ebf891a7b0b50cdfd7179e92f141dad42a
repoDigests:
- registry.k8s.io/kube-proxy@sha256:bdd1fa8b53558a2e1967379a36b085c93faf15581e5fa9f212baf679d89c5bb5
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-rc.1
size: "25789553"
- id: sha256:a0607af4fcd8ae78708c5bd51f34ccf8442967b8e41f56f008cc2884690f2f3b
repoDigests:
- docker.io/kubernetesui/dashboard-api@sha256:96a702cfd3399d9eba23b3d37b09f798a4f51fcd8c8dfa8552c7829ade9c4aff
repoTags:
- docker.io/kubernetesui/dashboard-api:1.14.0
size: "16498766"
- id: sha256:59f642f485d26d479d2dedc7c6139f5ce41939fa22c1152314fefdf3c463aa06
repoDigests:
- docker.io/kubernetesui/dashboard-web@sha256:cc7c31bd2d8470e3590dcb20fe980769b43054b31a5c5c0da606e9add898d85d
repoTags:
- docker.io/kubernetesui/dashboard-web:1.7.0
size: "62497108"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-509202 image ls --format yaml --alsologtostderr:
I1219 02:41:14.069765   18583 out.go:360] Setting OutFile to fd 1 ...
I1219 02:41:14.070038   18583 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:14.070048   18583 out.go:374] Setting ErrFile to fd 2...
I1219 02:41:14.070055   18583 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:14.070333   18583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:41:14.070948   18583 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:14.071080   18583 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:14.073250   18583 ssh_runner.go:195] Run: systemctl --version
I1219 02:41:14.075369   18583 main.go:144] libmachine: domain functional-509202 has defined MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:14.075718   18583 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:a8:92", ip: ""} in network mk-functional-509202: {Iface:virbr1 ExpiryTime:2025-12-19 03:37:46 +0000 UTC Type:0 Mac:52:54:00:28:a8:92 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-509202 Clientid:01:52:54:00:28:a8:92}
I1219 02:41:14.075747   18583 main.go:144] libmachine: domain functional-509202 has defined IP address 192.168.39.198 and MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:14.075874   18583 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-509202/id_rsa Username:docker}
I1219 02:41:14.157941   18583 ssh_runner.go:195] Run: sudo crictl --timeout=10s images --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageListYaml (0.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (7.06s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-509202 ssh pgrep buildkitd: exit status 1 (154.455679ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image build -t localhost/my-image:functional-509202 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-509202 image build -t localhost/my-image:functional-509202 testdata/build --alsologtostderr: (6.686080462s)
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-509202 image build -t localhost/my-image:functional-509202 testdata/build --alsologtostderr:
I1219 02:41:14.414140   18605 out.go:360] Setting OutFile to fd 1 ...
I1219 02:41:14.414263   18605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:14.414273   18605 out.go:374] Setting ErrFile to fd 2...
I1219 02:41:14.414280   18605 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1219 02:41:14.414521   18605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
I1219 02:41:14.415168   18605 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:14.416096   18605 config.go:182] Loaded profile config "functional-509202": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.35.0-rc.1
I1219 02:41:14.418892   18605 ssh_runner.go:195] Run: systemctl --version
I1219 02:41:14.420866   18605 main.go:144] libmachine: domain functional-509202 has defined MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:14.421201   18605 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:28:a8:92", ip: ""} in network mk-functional-509202: {Iface:virbr1 ExpiryTime:2025-12-19 03:37:46 +0000 UTC Type:0 Mac:52:54:00:28:a8:92 Iaid: IPaddr:192.168.39.198 Prefix:24 Hostname:functional-509202 Clientid:01:52:54:00:28:a8:92}
I1219 02:41:14.421227   18605 main.go:144] libmachine: domain functional-509202 has defined IP address 192.168.39.198 and MAC address 52:54:00:28:a8:92 in network mk-functional-509202
I1219 02:41:14.421346   18605 sshutil.go:53] new ssh client: &{IP:192.168.39.198 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/functional-509202/id_rsa Username:docker}
I1219 02:41:14.508166   18605 build_images.go:162] Building image from path: /tmp/build.1781534526.tar
I1219 02:41:14.508244   18605 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1219 02:41:14.521921   18605 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1781534526.tar
I1219 02:41:14.527264   18605 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1781534526.tar: stat -c "%s %y" /var/lib/minikube/build/build.1781534526.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1781534526.tar': No such file or directory
I1219 02:41:14.527309   18605 ssh_runner.go:362] scp /tmp/build.1781534526.tar --> /var/lib/minikube/build/build.1781534526.tar (3072 bytes)
I1219 02:41:14.559642   18605 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1781534526
I1219 02:41:14.572337   18605 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1781534526 -xf /var/lib/minikube/build/build.1781534526.tar
I1219 02:41:14.583824   18605 containerd.go:394] Building image: /var/lib/minikube/build/build.1781534526
I1219 02:41:14.583920   18605 ssh_runner.go:195] Run: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1781534526 --local dockerfile=/var/lib/minikube/build/build.1781534526 --output type=image,name=localhost/my-image:functional-509202
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.0s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 2.8s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.0s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.0s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.0s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 1.0s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#5 DONE 1.3s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.1s

#8 exporting to image
#8 exporting layers
#8 exporting layers 0.2s done
#8 exporting manifest sha256:ad7104af0263e8610a1a74286e4963a5fbde0850fbaaaeba37c2708e20a74084
#8 exporting manifest sha256:ad7104af0263e8610a1a74286e4963a5fbde0850fbaaaeba37c2708e20a74084 0.0s done
#8 exporting config sha256:ea9ac492f9329b6848982db0ecba16d6ec3a658ecfea61826ca86c11ccf30e31 0.0s done
#8 naming to localhost/my-image:functional-509202 done
#8 DONE 0.3s
I1219 02:41:20.996480   18605 ssh_runner.go:235] Completed: sudo buildctl build --frontend dockerfile.v0 --local context=/var/lib/minikube/build/build.1781534526 --local dockerfile=/var/lib/minikube/build/build.1781534526 --output type=image,name=localhost/my-image:functional-509202: (6.412515403s)
I1219 02:41:20.996550   18605 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1781534526
I1219 02:41:21.022030   18605 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1781534526.tar
I1219 02:41:21.038484   18605 build_images.go:218] Built localhost/my-image:functional-509202 from /tmp/build.1781534526.tar
I1219 02:41:21.038528   18605 build_images.go:134] succeeded building to: functional-509202
I1219 02:41:21.038535   18605 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageBuild (7.06s)
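
Judging from the buildkit steps above (a 97-byte Dockerfile, FROM gcr.io/k8s-minikube/busybox:latest, RUN true, ADD content.txt /), the build can be reproduced by hand with a context along these lines. This is a sketch inferred from the log, not the literal testdata/build fixture:

# hypothetical scratch directory; Dockerfile contents inferred from build steps #1-#7
mkdir -p /tmp/build-sketch
printf 'FROM gcr.io/k8s-minikube/busybox:latest\nRUN true\nADD content.txt /\n' > /tmp/build-sketch/Dockerfile
echo test > /tmp/build-sketch/content.txt
out/minikube-linux-amd64 -p functional-509202 image build -t localhost/my-image:functional-509202 /tmp/build-sketch --alsologtostderr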

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (1.15s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.126552156s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-509202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/Setup (1.15s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image load --daemon kicbase/echo-server:functional-509202 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadDaemon (1.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image load --daemon kicbase/echo-server:functional-509202 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageReloadDaemon (1.16s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (2.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:250: (dbg) Done: docker pull kicbase/echo-server:latest: (1.16346633s)
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-509202
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image load --daemon kicbase/echo-server:functional-509202 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageTagAndLoadDaemon (2.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.33s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image save kicbase/echo-server:functional-509202 /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveToFile (0.33s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image rm kicbase/echo-server:functional-509202 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageRemove (0.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.59s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image load /home/jenkins/workspace/KVM_Linux_containerd_integration/echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageLoadFromFile (0.59s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-509202
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 image save --daemon kicbase/echo-server:functional-509202 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-509202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ImageCommands/ImageSaveDaemon (0.37s)
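
The ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon steps above together cover the usual round trip for moving an image in and out of the cluster runtime. Condensed into a manual sequence (a sketch reusing the same subcommands, with a hypothetical /tmp path for the tarball):

out/minikube-linux-amd64 -p functional-509202 image save kicbase/echo-server:functional-509202 /tmp/echo-server.tar
out/minikube-linux-amd64 -p functional-509202 image rm kicbase/echo-server:functional-509202
out/minikube-linux-amd64 -p functional-509202 image load /tmp/echo-server.tar
out/minikube-linux-amd64 -p functional-509202 image ls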

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_changes (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_minikube_cluster (0.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.08s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/UpdateContextCmd/no_clusters (0.08s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (2.42s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-509202 service list: (2.416564306s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/List (2.42s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (2.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-509202 service list -o json: (2.43132134s)
functional_test.go:1504: Took "2.431430391s" to run "out/minikube-linux-amd64 -p functional-509202 service list -o json"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/JSONOutput (2.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 service --namespace=default --https --url hello-node
functional_test.go:1532: found endpoint: https://192.168.39.198:32548
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 service hello-node --url --format={{.IP}}
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/Format (0.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-509202 service hello-node --url
functional_test.go:1575: found endpoint for hello-node: http://192.168.39.198:32548
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/ServiceCmd/URL (0.25s)
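
The service tests above all resolve hello-node to the same NodePort endpoint (http://192.168.39.198:32548). A direct spot check from the host (a sketch, assuming the service is still exposed at that address):

curl -s http://192.168.39.198:32548
out/minikube-linux-amd64 -p functional-509202 service hello-node --url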

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-509202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_echo-server_images (0.04s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-509202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_my-image_image (0.02s)

TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-509202
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (206.17s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
E1219 02:41:49.282103    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:41:59.522543    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:42:20.002938    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:43:00.963893    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:44:22.885393    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:44:44.847527    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (3m25.613826616s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (206.17s)

TestMultiControlPlane/serial/DeployApp (9s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 kubectl -- rollout status deployment/busybox: (6.647544795s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-2pm92 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-8f29t -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-cnr2m -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-2pm92 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-8f29t -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-cnr2m -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-2pm92 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-8f29t -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-cnr2m -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.00s)
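
DeployApp applies testdata/ha/ha-pod-dns-test.yaml, waits for the busybox deployment to roll out, then checks in-cluster DNS from every replica. The same check can be repeated by hand against a single replica (a sketch; exec on deploy/busybox lets kubectl pick any pod):

out/minikube-linux-amd64 -p ha-949426 kubectl -- rollout status deployment/busybox
out/minikube-linux-amd64 -p ha-949426 kubectl -- exec deploy/busybox -- nslookup kubernetes.default.svc.cluster.local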

TestMultiControlPlane/serial/PingHostFromPods (1.25s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-2pm92 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-2pm92 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-8f29t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-8f29t -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-cnr2m -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 kubectl -- exec busybox-7b57f96db7-cnr2m -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.25s)

TestMultiControlPlane/serial/AddWorkerNode (48.96s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 node add --alsologtostderr -v 5
E1219 02:45:34.379669    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:34.385085    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:34.395420    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:34.415713    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:34.456126    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:34.536561    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:34.697034    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:35.017562    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:35.658554    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:36.939173    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:39.500399    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:44.620980    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:45:54.861343    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 node add --alsologtostderr -v 5: (48.270465452s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (48.96s)
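
Once node add returns, the status call above should list the new worker alongside the control-plane nodes. Two quick ways to confirm it joined (a sketch using standard minikube and kubectl subcommands):

out/minikube-linux-amd64 -p ha-949426 node list
out/minikube-linux-amd64 -p ha-949426 kubectl -- get nodes -o wide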

TestMultiControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-949426 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.07s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.67s)

TestMultiControlPlane/serial/CopyFile (10.67s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --output json --alsologtostderr -v 5
E1219 02:46:15.342019    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp testdata/cp-test.txt ha-949426:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile441684106/001/cp-test_ha-949426.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426:/home/docker/cp-test.txt ha-949426-m02:/home/docker/cp-test_ha-949426_ha-949426-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test_ha-949426_ha-949426-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426:/home/docker/cp-test.txt ha-949426-m03:/home/docker/cp-test_ha-949426_ha-949426-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test_ha-949426_ha-949426-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426:/home/docker/cp-test.txt ha-949426-m04:/home/docker/cp-test_ha-949426_ha-949426-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m04 "sudo cat /home/docker/cp-test_ha-949426_ha-949426-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp testdata/cp-test.txt ha-949426-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile441684106/001/cp-test_ha-949426-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m02:/home/docker/cp-test.txt ha-949426:/home/docker/cp-test_ha-949426-m02_ha-949426.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426 "sudo cat /home/docker/cp-test_ha-949426-m02_ha-949426.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m02:/home/docker/cp-test.txt ha-949426-m03:/home/docker/cp-test_ha-949426-m02_ha-949426-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test_ha-949426-m02_ha-949426-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m02:/home/docker/cp-test.txt ha-949426-m04:/home/docker/cp-test_ha-949426-m02_ha-949426-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m04 "sudo cat /home/docker/cp-test_ha-949426-m02_ha-949426-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp testdata/cp-test.txt ha-949426-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile441684106/001/cp-test_ha-949426-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m03:/home/docker/cp-test.txt ha-949426:/home/docker/cp-test_ha-949426-m03_ha-949426.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426 "sudo cat /home/docker/cp-test_ha-949426-m03_ha-949426.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m03:/home/docker/cp-test.txt ha-949426-m02:/home/docker/cp-test_ha-949426-m03_ha-949426-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test_ha-949426-m03_ha-949426-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m03:/home/docker/cp-test.txt ha-949426-m04:/home/docker/cp-test_ha-949426-m03_ha-949426-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m04 "sudo cat /home/docker/cp-test_ha-949426-m03_ha-949426-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp testdata/cp-test.txt ha-949426-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile441684106/001/cp-test_ha-949426-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m04:/home/docker/cp-test.txt ha-949426:/home/docker/cp-test_ha-949426-m04_ha-949426.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426 "sudo cat /home/docker/cp-test_ha-949426-m04_ha-949426.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m04:/home/docker/cp-test.txt ha-949426-m02:/home/docker/cp-test_ha-949426-m04_ha-949426-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test_ha-949426-m04_ha-949426-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m04:/home/docker/cp-test.txt ha-949426-m03:/home/docker/cp-test_ha-949426-m04_ha-949426-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test_ha-949426-m04_ha-949426-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (10.67s)
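
For reference, the copy-and-verify pattern the test repeats for every node pair comes down to two commands; a minimal sketch using this run's binary, profile, and node names (all taken from the log above, not a new procedure):

	# push a local file into a node, then read it back over ssh to confirm the copy
	out/minikube-linux-amd64 -p ha-949426 cp testdata/cp-test.txt ha-949426-m02:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m02 "sudo cat /home/docker/cp-test.txt"
	# node-to-node copies use the same cp subcommand, with a node prefix on both source and destination
	out/minikube-linux-amd64 -p ha-949426 cp ha-949426-m02:/home/docker/cp-test.txt ha-949426-m03:/home/docker/cp-test_ha-949426-m02_ha-949426-m03.txt
	out/minikube-linux-amd64 -p ha-949426 ssh -n ha-949426-m03 "sudo cat /home/docker/cp-test_ha-949426-m02_ha-949426-m03.txt"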

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (85.16s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 node stop m02 --alsologtostderr -v 5
E1219 02:46:39.042002    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:46:56.303080    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:47:06.726307    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 node stop m02 --alsologtostderr -v 5: (1m24.676342773s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5: exit status 7 (480.308952ms)

                                                
                                                
-- stdout --
	ha-949426
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-949426-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949426-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-949426-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:47:50.331475   21916 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:47:50.331716   21916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:47:50.331726   21916 out.go:374] Setting ErrFile to fd 2...
	I1219 02:47:50.331730   21916 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:47:50.331968   21916 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:47:50.332189   21916 out.go:368] Setting JSON to false
	I1219 02:47:50.332212   21916 mustload.go:66] Loading cluster: ha-949426
	I1219 02:47:50.332349   21916 notify.go:221] Checking for updates...
	I1219 02:47:50.332614   21916 config.go:182] Loaded profile config "ha-949426": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:47:50.332632   21916 status.go:174] checking status of ha-949426 ...
	I1219 02:47:50.334745   21916 status.go:371] ha-949426 host status = "Running" (err=<nil>)
	I1219 02:47:50.334761   21916 host.go:66] Checking if "ha-949426" exists ...
	I1219 02:47:50.337393   21916 main.go:144] libmachine: domain ha-949426 has defined MAC address 52:54:00:8b:05:65 in network mk-ha-949426
	I1219 02:47:50.337847   21916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:05:65", ip: ""} in network mk-ha-949426: {Iface:virbr1 ExpiryTime:2025-12-19 03:42:03 +0000 UTC Type:0 Mac:52:54:00:8b:05:65 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-949426 Clientid:01:52:54:00:8b:05:65}
	I1219 02:47:50.337876   21916 main.go:144] libmachine: domain ha-949426 has defined IP address 192.168.39.191 and MAC address 52:54:00:8b:05:65 in network mk-ha-949426
	I1219 02:47:50.338052   21916 host.go:66] Checking if "ha-949426" exists ...
	I1219 02:47:50.338279   21916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:47:50.340665   21916 main.go:144] libmachine: domain ha-949426 has defined MAC address 52:54:00:8b:05:65 in network mk-ha-949426
	I1219 02:47:50.341094   21916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:8b:05:65", ip: ""} in network mk-ha-949426: {Iface:virbr1 ExpiryTime:2025-12-19 03:42:03 +0000 UTC Type:0 Mac:52:54:00:8b:05:65 Iaid: IPaddr:192.168.39.191 Prefix:24 Hostname:ha-949426 Clientid:01:52:54:00:8b:05:65}
	I1219 02:47:50.341118   21916 main.go:144] libmachine: domain ha-949426 has defined IP address 192.168.39.191 and MAC address 52:54:00:8b:05:65 in network mk-ha-949426
	I1219 02:47:50.341310   21916 sshutil.go:53] new ssh client: &{IP:192.168.39.191 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/ha-949426/id_rsa Username:docker}
	I1219 02:47:50.430155   21916 ssh_runner.go:195] Run: systemctl --version
	I1219 02:47:50.436895   21916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:47:50.453817   21916 kubeconfig.go:125] found "ha-949426" server: "https://192.168.39.254:8443"
	I1219 02:47:50.453846   21916 api_server.go:166] Checking apiserver status ...
	I1219 02:47:50.453878   21916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:47:50.473087   21916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup
	W1219 02:47:50.487751   21916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1519/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 02:47:50.487821   21916 ssh_runner.go:195] Run: ls
	I1219 02:47:50.495048   21916 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1219 02:47:50.499701   21916 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1219 02:47:50.499730   21916 status.go:463] ha-949426 apiserver status = Running (err=<nil>)
	I1219 02:47:50.499757   21916 status.go:176] ha-949426 status: &{Name:ha-949426 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:47:50.499781   21916 status.go:174] checking status of ha-949426-m02 ...
	I1219 02:47:50.501259   21916 status.go:371] ha-949426-m02 host status = "Stopped" (err=<nil>)
	I1219 02:47:50.501277   21916 status.go:384] host is not running, skipping remaining checks
	I1219 02:47:50.501285   21916 status.go:176] ha-949426-m02 status: &{Name:ha-949426-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:47:50.501303   21916 status.go:174] checking status of ha-949426-m03 ...
	I1219 02:47:50.502587   21916 status.go:371] ha-949426-m03 host status = "Running" (err=<nil>)
	I1219 02:47:50.502604   21916 host.go:66] Checking if "ha-949426-m03" exists ...
	I1219 02:47:50.504834   21916 main.go:144] libmachine: domain ha-949426-m03 has defined MAC address 52:54:00:4e:d6:3c in network mk-ha-949426
	I1219 02:47:50.505201   21916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:d6:3c", ip: ""} in network mk-ha-949426: {Iface:virbr1 ExpiryTime:2025-12-19 03:44:07 +0000 UTC Type:0 Mac:52:54:00:4e:d6:3c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-949426-m03 Clientid:01:52:54:00:4e:d6:3c}
	I1219 02:47:50.505223   21916 main.go:144] libmachine: domain ha-949426-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4e:d6:3c in network mk-ha-949426
	I1219 02:47:50.505358   21916 host.go:66] Checking if "ha-949426-m03" exists ...
	I1219 02:47:50.505534   21916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:47:50.507440   21916 main.go:144] libmachine: domain ha-949426-m03 has defined MAC address 52:54:00:4e:d6:3c in network mk-ha-949426
	I1219 02:47:50.507791   21916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:4e:d6:3c", ip: ""} in network mk-ha-949426: {Iface:virbr1 ExpiryTime:2025-12-19 03:44:07 +0000 UTC Type:0 Mac:52:54:00:4e:d6:3c Iaid: IPaddr:192.168.39.92 Prefix:24 Hostname:ha-949426-m03 Clientid:01:52:54:00:4e:d6:3c}
	I1219 02:47:50.507820   21916 main.go:144] libmachine: domain ha-949426-m03 has defined IP address 192.168.39.92 and MAC address 52:54:00:4e:d6:3c in network mk-ha-949426
	I1219 02:47:50.507937   21916 sshutil.go:53] new ssh client: &{IP:192.168.39.92 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/ha-949426-m03/id_rsa Username:docker}
	I1219 02:47:50.589968   21916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:47:50.609004   21916 kubeconfig.go:125] found "ha-949426" server: "https://192.168.39.254:8443"
	I1219 02:47:50.609046   21916 api_server.go:166] Checking apiserver status ...
	I1219 02:47:50.609077   21916 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 02:47:50.630493   21916 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1468/cgroup
	W1219 02:47:50.641748   21916 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1468/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 02:47:50.641806   21916 ssh_runner.go:195] Run: ls
	I1219 02:47:50.646368   21916 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I1219 02:47:50.651232   21916 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I1219 02:47:50.651250   21916 status.go:463] ha-949426-m03 apiserver status = Running (err=<nil>)
	I1219 02:47:50.651257   21916 status.go:176] ha-949426-m03 status: &{Name:ha-949426-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:47:50.651269   21916 status.go:174] checking status of ha-949426-m04 ...
	I1219 02:47:50.652791   21916 status.go:371] ha-949426-m04 host status = "Running" (err=<nil>)
	I1219 02:47:50.652813   21916 host.go:66] Checking if "ha-949426-m04" exists ...
	I1219 02:47:50.655198   21916 main.go:144] libmachine: domain ha-949426-m04 has defined MAC address 52:54:00:70:a9:4a in network mk-ha-949426
	I1219 02:47:50.655607   21916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:a9:4a", ip: ""} in network mk-ha-949426: {Iface:virbr1 ExpiryTime:2025-12-19 03:45:41 +0000 UTC Type:0 Mac:52:54:00:70:a9:4a Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-949426-m04 Clientid:01:52:54:00:70:a9:4a}
	I1219 02:47:50.655626   21916 main.go:144] libmachine: domain ha-949426-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:70:a9:4a in network mk-ha-949426
	I1219 02:47:50.655737   21916 host.go:66] Checking if "ha-949426-m04" exists ...
	I1219 02:47:50.656003   21916 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 02:47:50.658325   21916 main.go:144] libmachine: domain ha-949426-m04 has defined MAC address 52:54:00:70:a9:4a in network mk-ha-949426
	I1219 02:47:50.658815   21916 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:70:a9:4a", ip: ""} in network mk-ha-949426: {Iface:virbr1 ExpiryTime:2025-12-19 03:45:41 +0000 UTC Type:0 Mac:52:54:00:70:a9:4a Iaid: IPaddr:192.168.39.205 Prefix:24 Hostname:ha-949426-m04 Clientid:01:52:54:00:70:a9:4a}
	I1219 02:47:50.658877   21916 main.go:144] libmachine: domain ha-949426-m04 has defined IP address 192.168.39.205 and MAC address 52:54:00:70:a9:4a in network mk-ha-949426
	I1219 02:47:50.659073   21916 sshutil.go:53] new ssh client: &{IP:192.168.39.205 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/ha-949426-m04/id_rsa Username:docker}
	I1219 02:47:50.738539   21916 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 02:47:50.755692   21916 status.go:176] ha-949426-m04 status: &{Name:ha-949426-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (85.16s)
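
The exit status 7 above is the interesting bit: with one control-plane host stopped, status still prints per-node state but exits non-zero. A condensed sketch of what the test drives, using this run's profile (the trailing || echo is illustrative, not part of the test):

	out/minikube-linux-amd64 -p ha-949426 node stop m02 --alsologtostderr -v 5
	# ha-949426-m02 now reports host/kubelet/apiserver as Stopped and the status command exits 7
	out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5 || echo "status exit code: $?"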

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.49s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (26.42s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 node start m02 --alsologtostderr -v 5: (25.494100204s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (26.42s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E1219 02:48:18.224305    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (389.72s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 stop --alsologtostderr -v 5
E1219 02:49:44.847417    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:50:34.381890    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:51:02.066259    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:51:07.896334    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:51:39.042651    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 stop --alsologtostderr -v 5: (4m27.167621342s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 start --wait true --alsologtostderr -v 5
E1219 02:54:44.846943    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 start --wait true --alsologtostderr -v 5: (2m2.400737246s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (389.72s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (6.51s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 node delete m03 --alsologtostderr -v 5: (5.863769935s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (6.51s)
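
The go-template in the final step prints the Ready condition status for each remaining node, one per line; run by hand it would look roughly like this (a sketch of the check, not output captured from this run):

	kubectl --context ha-949426 get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'
	# expect one "True" per node once ha-949426, ha-949426-m02 and ha-949426-m04 settle after the delete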

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.51s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (248.03s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 stop --alsologtostderr -v 5
E1219 02:55:34.379980    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:56:39.043060    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 02:58:02.087266    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 stop --alsologtostderr -v 5: (4m7.972441637s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5: exit status 7 (59.272212ms)

                                                
                                                
-- stdout --
	ha-949426
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949426-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-949426-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 02:59:03.276064   24926 out.go:360] Setting OutFile to fd 1 ...
	I1219 02:59:03.276319   24926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:59:03.276328   24926 out.go:374] Setting ErrFile to fd 2...
	I1219 02:59:03.276333   24926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 02:59:03.276573   24926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 02:59:03.276786   24926 out.go:368] Setting JSON to false
	I1219 02:59:03.276808   24926 mustload.go:66] Loading cluster: ha-949426
	I1219 02:59:03.276852   24926 notify.go:221] Checking for updates...
	I1219 02:59:03.277202   24926 config.go:182] Loaded profile config "ha-949426": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 02:59:03.277217   24926 status.go:174] checking status of ha-949426 ...
	I1219 02:59:03.279242   24926 status.go:371] ha-949426 host status = "Stopped" (err=<nil>)
	I1219 02:59:03.279255   24926 status.go:384] host is not running, skipping remaining checks
	I1219 02:59:03.279260   24926 status.go:176] ha-949426 status: &{Name:ha-949426 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:59:03.279283   24926 status.go:174] checking status of ha-949426-m02 ...
	I1219 02:59:03.280381   24926 status.go:371] ha-949426-m02 host status = "Stopped" (err=<nil>)
	I1219 02:59:03.280393   24926 status.go:384] host is not running, skipping remaining checks
	I1219 02:59:03.280397   24926 status.go:176] ha-949426-m02 status: &{Name:ha-949426-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 02:59:03.280407   24926 status.go:174] checking status of ha-949426-m04 ...
	I1219 02:59:03.281416   24926 status.go:371] ha-949426-m04 host status = "Stopped" (err=<nil>)
	I1219 02:59:03.281428   24926 status.go:384] host is not running, skipping remaining checks
	I1219 02:59:03.281432   24926 status.go:176] ha-949426-m04 status: &{Name:ha-949426-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (248.03s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (80.61s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd
E1219 02:59:44.847454    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 start --wait true --alsologtostderr -v 5 --driver=kvm2  --container-runtime=containerd: (1m19.972189073s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (80.61s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.48s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.48s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (98.62s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 node add --control-plane --alsologtostderr -v 5
E1219 03:00:34.380255    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:01:39.042206    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:01:57.427996    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-949426 node add --control-plane --alsologtostderr -v 5: (1m37.926717233s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-949426 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (98.62s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.66s)

                                                
                                    
TestJSONOutput/start/Command (79.15s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-363257 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-363257 --output=json --user=testUser --memory=3072 --wait=true --driver=kvm2  --container-runtime=containerd: (1m19.149253865s)
--- PASS: TestJSONOutput/start/Command (79.15s)

                                                
                                    
TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.69s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-363257 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.69s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.59s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-363257 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.59s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (7.16s)
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-363257 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-363257 --output=json --user=testUser: (7.161437302s)
--- PASS: TestJSONOutput/stop/Command (7.16s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-183956 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-183956 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (71.683295ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"c85be623-fecf-4345-99a4-50cbeff4307c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-183956] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"899988c3-9e7c-45de-829c-1ecd7c5e2288","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22230"}}
	{"specversion":"1.0","id":"3b3076cf-938b-43cf-9ae2-2ed196e23be2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"70888dc2-5d2a-42b8-97d1-3e8bbf26f64e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig"}}
	{"specversion":"1.0","id":"a8d44b60-9cd2-4691-a0f6-728037c172e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube"}}
	{"specversion":"1.0","id":"12017e1f-8d26-4327-9e81-d5a7a00c1568","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"48a606ac-225d-48cb-bbe3-b09b1981b3d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"65d60e35-82b7-483d-bd7d-2321a31c5bf4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-183956" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-183956
--- PASS: TestErrorJSONOutput (0.22s)
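
Every line in the stdout block above is a self-contained CloudEvents-style JSON object, and the final io.k8s.sigs.minikube.error event carries the machine-readable failure (name, exitcode, message). Purely as an illustration (jq is not used by the test), the error could be pulled out of the stream like this:

	out/minikube-linux-amd64 start -p json-output-error-183956 --memory=3072 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name) (exit \(.exitcode)): \(.message)"'
	# -> DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/amd64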

                                                
                                    
TestMainNoArgs (0.06s)
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (82.08s)
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-266694 --driver=kvm2  --container-runtime=containerd
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-266694 --driver=kvm2  --container-runtime=containerd: (37.920083987s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-269120 --driver=kvm2  --container-runtime=containerd
E1219 03:04:44.847205    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-269120 --driver=kvm2  --container-runtime=containerd: (41.693138029s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-266694
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-269120
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:176: Cleaning up "second-269120" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p second-269120
helpers_test.go:176: Cleaning up "first-266694" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p first-266694
--- PASS: TestMinikubeProfile (82.08s)
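
Both profile checks above go through the JSON form of the listing. As a sketch of inspecting it by hand, assuming the output keeps its usual valid/invalid arrays with a Name field per profile (jq is an illustration, not part of the test):

	out/minikube-linux-amd64 profile first-266694   # make first-266694 the active profile
	out/minikube-linux-amd64 profile list -ojson | jq -r '.valid[].Name'
	# -> first-266694
	#    second-269120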

                                                
                                    
TestMountStart/serial/StartWithMountFirst (20.56s)
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-248953 --memory=3072 --mount-string /tmp/TestMountStartserial1874655040/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-248953 --memory=3072 --mount-string /tmp/TestMountStartserial1874655040/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (19.556237144s)
--- PASS: TestMountStart/serial/StartWithMountFirst (20.56s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-248953 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-248953 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (20.92s)
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-265978 --memory=3072 --mount-string /tmp/TestMountStartserial1874655040/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd
E1219 03:05:34.379871    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-265978 --memory=3072 --mount-string /tmp/TestMountStartserial1874655040/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (19.920332579s)
--- PASS: TestMountStart/serial/StartWithMountSecond (20.92s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.3s)
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-265978 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-265978 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.30s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-248953 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-265978 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-265978 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (1.28s)
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-265978
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-265978: (1.283739935s)
--- PASS: TestMountStart/serial/Stop (1.28s)

                                                
                                    
TestMountStart/serial/RestartStopped (19.43s)
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-265978
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-265978: (18.434636308s)
--- PASS: TestMountStart/serial/RestartStopped (19.43s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.29s)
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-265978 ssh -- ls /minikube-host
mount_start_test.go:147: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-265978 ssh -- findmnt --json /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.29s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (106.97s)
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534328 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
E1219 03:06:39.045370    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:07:47.897189    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-534328 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m46.646645761s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (106.97s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (6.92s)
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-534328 -- rollout status deployment/busybox: (5.298968068s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-2zvvx -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-cqj7d -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-2zvvx -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-cqj7d -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-2zvvx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-cqj7d -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.92s)
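
A condensed sketch of the deployment and DNS checks above, assuming the busybox manifest from the test's testdata directory (pod names vary per run, so <busybox-pod> is a placeholder):
	minikube kubectl -p multinode-534328 -- apply -f testdata/multinodes/multinode-pod-dns-test.yaml
	minikube kubectl -p multinode-534328 -- rollout status deployment/busybox
	# resolve an external name and the in-cluster service name from a pod on each node
	minikube kubectl -p multinode-534328 -- exec <busybox-pod> -- nslookup kubernetes.io
	minikube kubectl -p multinode-534328 -- exec <busybox-pod> -- nslookup kubernetes.default.svc.cluster.local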

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-2zvvx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-2zvvx -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-cqj7d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-534328 -- exec busybox-7b57f96db7-cqj7d -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.82s)
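
The host-reachability check above extracts the host gateway address by resolving host.minikube.internal inside the pod and then pings it; a sketch (<busybox-pod> is a placeholder):
	minikube kubectl -p multinode-534328 -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	minikube kubectl -p multinode-534328 -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.39.1"    # IP returned by the lookup above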

                                                
                                    
TestMultiNode/serial/AddNode (44.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-534328 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-534328 -v=5 --alsologtostderr: (43.633227277s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.09s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-534328 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.45s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status --output json --alsologtostderr
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp testdata/cp-test.txt multinode-534328:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1603788479/001/cp-test_multinode-534328.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328:/home/docker/cp-test.txt multinode-534328-m02:/home/docker/cp-test_multinode-534328_multinode-534328-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m02 "sudo cat /home/docker/cp-test_multinode-534328_multinode-534328-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328:/home/docker/cp-test.txt multinode-534328-m03:/home/docker/cp-test_multinode-534328_multinode-534328-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m03 "sudo cat /home/docker/cp-test_multinode-534328_multinode-534328-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp testdata/cp-test.txt multinode-534328-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1603788479/001/cp-test_multinode-534328-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328-m02:/home/docker/cp-test.txt multinode-534328:/home/docker/cp-test_multinode-534328-m02_multinode-534328.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328 "sudo cat /home/docker/cp-test_multinode-534328-m02_multinode-534328.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328-m02:/home/docker/cp-test.txt multinode-534328-m03:/home/docker/cp-test_multinode-534328-m02_multinode-534328-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m03 "sudo cat /home/docker/cp-test_multinode-534328-m02_multinode-534328-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp testdata/cp-test.txt multinode-534328-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1603788479/001/cp-test_multinode-534328-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328-m03:/home/docker/cp-test.txt multinode-534328:/home/docker/cp-test_multinode-534328-m03_multinode-534328.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328 "sudo cat /home/docker/cp-test_multinode-534328-m03_multinode-534328.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 cp multinode-534328-m03:/home/docker/cp-test.txt multinode-534328-m02:/home/docker/cp-test_multinode-534328-m03_multinode-534328-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 ssh -n multinode-534328-m02 "sudo cat /home/docker/cp-test_multinode-534328-m03_multinode-534328-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.88s)
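
The copy matrix above boils down to three forms of `minikube cp` plus an ssh-based verification; a sketch (destination paths are illustrative):
	minikube -p multinode-534328 cp testdata/cp-test.txt multinode-534328:/home/docker/cp-test.txt                            # host -> node
	minikube -p multinode-534328 cp multinode-534328:/home/docker/cp-test.txt /tmp/cp-test_multinode-534328.txt               # node -> host
	minikube -p multinode-534328 cp multinode-534328:/home/docker/cp-test.txt multinode-534328-m02:/home/docker/cp-test.txt   # node -> node
	minikube -p multinode-534328 ssh -n multinode-534328-m02 "sudo cat /home/docker/cp-test.txt"                              # verify contents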

                                                
                                    
TestMultiNode/serial/StopNode (2.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-534328 node stop m03: (1.39836805s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-534328 status: exit status 7 (318.302795ms)

                                                
                                                
-- stdout --
	multinode-534328
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-534328-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-534328-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-534328 status --alsologtostderr: exit status 7 (318.796797ms)

                                                
                                                
-- stdout --
	multinode-534328
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-534328-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-534328-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:08:48.811103   30441 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:08:48.811209   30441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:08:48.811217   30441 out.go:374] Setting ErrFile to fd 2...
	I1219 03:08:48.811221   30441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:08:48.811380   30441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:08:48.811534   30441 out.go:368] Setting JSON to false
	I1219 03:08:48.811556   30441 mustload.go:66] Loading cluster: multinode-534328
	I1219 03:08:48.811690   30441 notify.go:221] Checking for updates...
	I1219 03:08:48.811914   30441 config.go:182] Loaded profile config "multinode-534328": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:08:48.811927   30441 status.go:174] checking status of multinode-534328 ...
	I1219 03:08:48.813653   30441 status.go:371] multinode-534328 host status = "Running" (err=<nil>)
	I1219 03:08:48.813669   30441 host.go:66] Checking if "multinode-534328" exists ...
	I1219 03:08:48.816450   30441 main.go:144] libmachine: domain multinode-534328 has defined MAC address 52:54:00:9c:c8:a6 in network mk-multinode-534328
	I1219 03:08:48.816884   30441 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:c8:a6", ip: ""} in network mk-multinode-534328: {Iface:virbr1 ExpiryTime:2025-12-19 04:06:16 +0000 UTC Type:0 Mac:52:54:00:9c:c8:a6 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:multinode-534328 Clientid:01:52:54:00:9c:c8:a6}
	I1219 03:08:48.816908   30441 main.go:144] libmachine: domain multinode-534328 has defined IP address 192.168.39.226 and MAC address 52:54:00:9c:c8:a6 in network mk-multinode-534328
	I1219 03:08:48.817091   30441 host.go:66] Checking if "multinode-534328" exists ...
	I1219 03:08:48.817298   30441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:08:48.819210   30441 main.go:144] libmachine: domain multinode-534328 has defined MAC address 52:54:00:9c:c8:a6 in network mk-multinode-534328
	I1219 03:08:48.819525   30441 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:9c:c8:a6", ip: ""} in network mk-multinode-534328: {Iface:virbr1 ExpiryTime:2025-12-19 04:06:16 +0000 UTC Type:0 Mac:52:54:00:9c:c8:a6 Iaid: IPaddr:192.168.39.226 Prefix:24 Hostname:multinode-534328 Clientid:01:52:54:00:9c:c8:a6}
	I1219 03:08:48.819544   30441 main.go:144] libmachine: domain multinode-534328 has defined IP address 192.168.39.226 and MAC address 52:54:00:9c:c8:a6 in network mk-multinode-534328
	I1219 03:08:48.819653   30441 sshutil.go:53] new ssh client: &{IP:192.168.39.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/multinode-534328/id_rsa Username:docker}
	I1219 03:08:48.905176   30441 ssh_runner.go:195] Run: systemctl --version
	I1219 03:08:48.911776   30441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:08:48.927508   30441 kubeconfig.go:125] found "multinode-534328" server: "https://192.168.39.226:8443"
	I1219 03:08:48.927541   30441 api_server.go:166] Checking apiserver status ...
	I1219 03:08:48.927572   30441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1219 03:08:48.947815   30441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup
	W1219 03:08:48.958821   30441 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1424/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1219 03:08:48.958889   30441 ssh_runner.go:195] Run: ls
	I1219 03:08:48.963636   30441 api_server.go:253] Checking apiserver healthz at https://192.168.39.226:8443/healthz ...
	I1219 03:08:48.967975   30441 api_server.go:279] https://192.168.39.226:8443/healthz returned 200:
	ok
	I1219 03:08:48.967994   30441 status.go:463] multinode-534328 apiserver status = Running (err=<nil>)
	I1219 03:08:48.968002   30441 status.go:176] multinode-534328 status: &{Name:multinode-534328 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:08:48.968034   30441 status.go:174] checking status of multinode-534328-m02 ...
	I1219 03:08:48.969498   30441 status.go:371] multinode-534328-m02 host status = "Running" (err=<nil>)
	I1219 03:08:48.969514   30441 host.go:66] Checking if "multinode-534328-m02" exists ...
	I1219 03:08:48.971669   30441 main.go:144] libmachine: domain multinode-534328-m02 has defined MAC address 52:54:00:d5:fe:d1 in network mk-multinode-534328
	I1219 03:08:48.972040   30441 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d5:fe:d1", ip: ""} in network mk-multinode-534328: {Iface:virbr1 ExpiryTime:2025-12-19 04:07:17 +0000 UTC Type:0 Mac:52:54:00:d5:fe:d1 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:multinode-534328-m02 Clientid:01:52:54:00:d5:fe:d1}
	I1219 03:08:48.972069   30441 main.go:144] libmachine: domain multinode-534328-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:d5:fe:d1 in network mk-multinode-534328
	I1219 03:08:48.972188   30441 host.go:66] Checking if "multinode-534328-m02" exists ...
	I1219 03:08:48.972361   30441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1219 03:08:48.974317   30441 main.go:144] libmachine: domain multinode-534328-m02 has defined MAC address 52:54:00:d5:fe:d1 in network mk-multinode-534328
	I1219 03:08:48.974649   30441 main.go:144] libmachine: found host DHCP lease matching {name: "", mac: "52:54:00:d5:fe:d1", ip: ""} in network mk-multinode-534328: {Iface:virbr1 ExpiryTime:2025-12-19 04:07:17 +0000 UTC Type:0 Mac:52:54:00:d5:fe:d1 Iaid: IPaddr:192.168.39.188 Prefix:24 Hostname:multinode-534328-m02 Clientid:01:52:54:00:d5:fe:d1}
	I1219 03:08:48.974670   30441 main.go:144] libmachine: domain multinode-534328-m02 has defined IP address 192.168.39.188 and MAC address 52:54:00:d5:fe:d1 in network mk-multinode-534328
	I1219 03:08:48.974811   30441 sshutil.go:53] new ssh client: &{IP:192.168.39.188 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/22230-5003/.minikube/machines/multinode-534328-m02/id_rsa Username:docker}
	I1219 03:08:49.053668   30441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1219 03:08:49.069583   30441 status.go:176] multinode-534328-m02 status: &{Name:multinode-534328-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:08:49.069614   30441 status.go:174] checking status of multinode-534328-m03 ...
	I1219 03:08:49.071137   30441 status.go:371] multinode-534328-m03 host status = "Stopped" (err=<nil>)
	I1219 03:08:49.071156   30441 status.go:384] host is not running, skipping remaining checks
	I1219 03:08:49.071161   30441 status.go:176] multinode-534328-m03 status: &{Name:multinode-534328-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.04s)
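
To reproduce the degraded-status behaviour checked here: stop one node and note that `status` still prints per-node state but exits non-zero; a sketch:
	minikube -p multinode-534328 node stop m03
	minikube -p multinode-534328 status --alsologtostderr
	echo $?    # expected: 7, since one host is Stopped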

                                                
                                    
TestMultiNode/serial/StartAfterStop (36.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-534328 node start m03 -v=5 --alsologtostderr: (35.973064182s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (36.47s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (296.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-534328
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-534328
E1219 03:09:44.847175    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:10:34.384763    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:11:39.042172    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-534328: (2m59.041346936s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534328 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-534328 --wait=true -v=5 --alsologtostderr: (1m57.474388656s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-534328
--- PASS: TestMultiNode/serial/RestartKeepsNodes (296.64s)
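
A sketch of the stop-then-restart cycle verified here; the node list is expected to be identical before and after the restart:
	minikube node list -p multinode-534328
	minikube stop -p multinode-534328
	minikube start -p multinode-534328 --wait=true
	minikube node list -p multinode-534328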

                                                
                                    
TestMultiNode/serial/DeleteNode (1.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-534328 node delete m03: (1.541718078s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.99s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (173.67s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 stop
E1219 03:14:42.088269    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:14:44.847157    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:15:34.384165    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:16:39.045936    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-534328 stop: (2m53.544371097s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-534328 status: exit status 7 (60.191126ms)

                                                
                                                
-- stdout --
	multinode-534328
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-534328-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-534328 status --alsologtostderr: exit status 7 (60.479347ms)

                                                
                                                
-- stdout --
	multinode-534328
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-534328-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:17:17.832618   32738 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:17:17.832865   32738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:17:17.832873   32738 out.go:374] Setting ErrFile to fd 2...
	I1219 03:17:17.832877   32738 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:17:17.833095   32738 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:17:17.833249   32738 out.go:368] Setting JSON to false
	I1219 03:17:17.833270   32738 mustload.go:66] Loading cluster: multinode-534328
	I1219 03:17:17.833372   32738 notify.go:221] Checking for updates...
	I1219 03:17:17.833660   32738 config.go:182] Loaded profile config "multinode-534328": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:17:17.833676   32738 status.go:174] checking status of multinode-534328 ...
	I1219 03:17:17.835843   32738 status.go:371] multinode-534328 host status = "Stopped" (err=<nil>)
	I1219 03:17:17.835860   32738 status.go:384] host is not running, skipping remaining checks
	I1219 03:17:17.835866   32738 status.go:176] multinode-534328 status: &{Name:multinode-534328 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1219 03:17:17.835883   32738 status.go:174] checking status of multinode-534328-m02 ...
	I1219 03:17:17.837044   32738 status.go:371] multinode-534328-m02 host status = "Stopped" (err=<nil>)
	I1219 03:17:17.837058   32738 status.go:384] host is not running, skipping remaining checks
	I1219 03:17:17.837062   32738 status.go:176] multinode-534328-m02 status: &{Name:multinode-534328-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (173.67s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (76.08s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534328 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-534328 --wait=true -v=5 --alsologtostderr --driver=kvm2  --container-runtime=containerd: (1m15.641938767s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-534328 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (76.08s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (39.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-534328
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534328-m02 --driver=kvm2  --container-runtime=containerd
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-534328-m02 --driver=kvm2  --container-runtime=containerd: exit status 14 (73.069929ms)

                                                
                                                
-- stdout --
	* [multinode-534328-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-534328-m02' is duplicated with machine name 'multinode-534328-m02' in profile 'multinode-534328'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-534328-m03 --driver=kvm2  --container-runtime=containerd
E1219 03:18:37.430147    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-534328-m03 --driver=kvm2  --container-runtime=containerd: (38.248647317s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-534328
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-534328: exit status 80 (200.782833ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-534328 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-534328-m03 already exists in multinode-534328-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-534328-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (39.37s)
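
The naming rules exercised above, in short: a new profile may not reuse a machine name that already belongs to an existing profile, and `node add` refuses a node whose generated name collides with another profile. A sketch of the two failing cases:
	minikube start -p multinode-534328-m02 --driver=kvm2 --container-runtime=containerd   # exit 14: duplicated profile name
	minikube node add -p multinode-534328                                                 # exit 80: node m03 clashes with the multinode-534328-m03 profile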

                                                
                                    
TestPreload (142.64s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-894793 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd
E1219 03:19:44.847494    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:20:34.379827    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-894793 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd: (1m28.165308628s)
preload_test.go:49: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-894793 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-linux-amd64 -p test-preload-894793 image pull gcr.io/k8s-minikube/busybox: (4.450999268s)
preload_test.go:55: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-894793
preload_test.go:55: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-894793: (7.154441661s)
preload_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-894793 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd
preload_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-894793 --preload=true --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=containerd: (41.889852035s)
preload_test.go:68: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-894793 image list
helpers_test.go:176: Cleaning up "test-preload-894793" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-894793
--- PASS: TestPreload (142.64s)
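
The preload round-trip exercised here, as a sketch: build a cluster without the preloaded images tarball, pull an extra image, then restart with preload enabled and confirm the image is still present:
	minikube start -p test-preload-894793 --memory=3072 --wait=true --preload=false --driver=kvm2 --container-runtime=containerd
	minikube -p test-preload-894793 image pull gcr.io/k8s-minikube/busybox
	minikube stop -p test-preload-894793
	minikube start -p test-preload-894793 --preload=true --wait=true --driver=kvm2 --container-runtime=containerd
	minikube -p test-preload-894793 image list    # busybox should still be listed
	minikube delete -p test-preload-894793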

                                                
                                    
TestScheduledStopUnix (110.82s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-796913 --memory=3072 --driver=kvm2  --container-runtime=containerd
E1219 03:21:39.043064    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-796913 --memory=3072 --driver=kvm2  --container-runtime=containerd: (39.306056694s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-796913 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1219 03:22:16.723396   35320 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:22:16.723649   35320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:22:16.723658   35320 out.go:374] Setting ErrFile to fd 2...
	I1219 03:22:16.723662   35320 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:22:16.723851   35320 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:22:16.724080   35320 out.go:368] Setting JSON to false
	I1219 03:22:16.724160   35320 mustload.go:66] Loading cluster: scheduled-stop-796913
	I1219 03:22:16.724465   35320 config.go:182] Loaded profile config "scheduled-stop-796913": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:22:16.724527   35320 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/config.json ...
	I1219 03:22:16.724712   35320 mustload.go:66] Loading cluster: scheduled-stop-796913
	I1219 03:22:16.724802   35320 config.go:182] Loaded profile config "scheduled-stop-796913": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-796913 -n scheduled-stop-796913
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-796913 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1219 03:22:16.999547   35365 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:22:16.999807   35365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:22:16.999818   35365 out.go:374] Setting ErrFile to fd 2...
	I1219 03:22:16.999825   35365 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:22:17.000056   35365 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:22:17.000300   35365 out.go:368] Setting JSON to false
	I1219 03:22:17.000515   35365 daemonize_unix.go:73] killing process 35355 as it is an old scheduled stop
	I1219 03:22:17.000617   35365 mustload.go:66] Loading cluster: scheduled-stop-796913
	I1219 03:22:17.000963   35365 config.go:182] Loaded profile config "scheduled-stop-796913": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:22:17.001054   35365 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/config.json ...
	I1219 03:22:17.001243   35365 mustload.go:66] Loading cluster: scheduled-stop-796913
	I1219 03:22:17.001369   35365 config.go:182] Loaded profile config "scheduled-stop-796913": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1219 03:22:17.006199    8978 retry.go:31] will retry after 51.355µs: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.007374    8978 retry.go:31] will retry after 198.187µs: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.008520    8978 retry.go:31] will retry after 276.935µs: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.009656    8978 retry.go:31] will retry after 266.73µs: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.010786    8978 retry.go:31] will retry after 309.911µs: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.011929    8978 retry.go:31] will retry after 424.429µs: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.013058    8978 retry.go:31] will retry after 605.397µs: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.014175    8978 retry.go:31] will retry after 1.985543ms: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.016362    8978 retry.go:31] will retry after 2.099692ms: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.019569    8978 retry.go:31] will retry after 3.842921ms: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.023755    8978 retry.go:31] will retry after 8.502212ms: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.032950    8978 retry.go:31] will retry after 10.787497ms: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.044201    8978 retry.go:31] will retry after 6.747735ms: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.051425    8978 retry.go:31] will retry after 27.699855ms: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
I1219 03:22:17.079636    8978 retry.go:31] will retry after 33.582407ms: open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-796913 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-796913 -n scheduled-stop-796913
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-796913
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-796913 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1219 03:22:42.657525   35514 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:22:42.657626   35514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:22:42.657633   35514 out.go:374] Setting ErrFile to fd 2...
	I1219 03:22:42.657639   35514 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:22:42.657836   35514 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:22:42.658061   35514 out.go:368] Setting JSON to false
	I1219 03:22:42.658134   35514 mustload.go:66] Loading cluster: scheduled-stop-796913
	I1219 03:22:42.658459   35514 config.go:182] Loaded profile config "scheduled-stop-796913": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:22:42.658528   35514 profile.go:143] Saving config to /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/scheduled-stop-796913/config.json ...
	I1219 03:22:42.658718   35514 mustload.go:66] Loading cluster: scheduled-stop-796913
	I1219 03:22:42.658815   35514 config.go:182] Loaded profile config "scheduled-stop-796913": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-796913
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-796913: exit status 7 (57.547487ms)

                                                
                                                
-- stdout --
	scheduled-stop-796913
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-796913 -n scheduled-stop-796913
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-796913 -n scheduled-stop-796913: exit status 7 (55.82179ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-796913" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-796913
--- PASS: TestScheduledStopUnix (110.82s)
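
The scheduled-stop workflow covered above, as a sketch; scheduling a new stop replaces any pending one (the old scheduler process is killed), and --cancel-scheduled clears it entirely:
	minikube stop -p scheduled-stop-796913 --schedule 5m
	minikube stop -p scheduled-stop-796913 --schedule 15s      # reschedules, killing the earlier scheduled stop
	minikube stop -p scheduled-stop-796913 --cancel-scheduled
	minikube status -p scheduled-stop-796913 --format={{.TimeToStop}}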

                                                
                                    
TestRunningBinaryUpgrade (147.73s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.35.0.3952436886 start -p running-upgrade-832483 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd
E1219 03:24:27.898004    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.35.0.3952436886 start -p running-upgrade-832483 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd: (1m34.889689205s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-832483 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-832483 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (46.51114563s)
helpers_test.go:176: Cleaning up "running-upgrade-832483" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-832483
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-832483: (1.04166481s)
--- PASS: TestRunningBinaryUpgrade (147.73s)
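
The running-binary upgrade verified here: create a cluster with an older released minikube, then run `start` on the same profile with the newer binary while the cluster is still running; a sketch (the versioned path is the test's temporary copy of the old release):
	/tmp/minikube-v1.35.0.3952436886 start -p running-upgrade-832483 --memory=3072 --vm-driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 start -p running-upgrade-832483 --memory=3072 --driver=kvm2 --container-runtime=containerd
	out/minikube-linux-amd64 delete -p running-upgrade-832483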

                                                
                                    
TestKubernetesUpgrade (145.03s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (1m5.070967627s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-772060
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-772060: (1.667844323s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-772060 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-772060 status --format={{.Host}}: exit status 7 (68.540903ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
E1219 03:24:44.846762    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (54.446734744s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-772060 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd: exit status 106 (95.116218ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-772060] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.35.0-rc.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-772060
	    minikube start -p kubernetes-upgrade-772060 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7720602 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.35.0-rc.1, by running:
	    
	    minikube start -p kubernetes-upgrade-772060 --kubernetes-version=v1.35.0-rc.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (22.095255576s)
helpers_test.go:176: Cleaning up "kubernetes-upgrade-772060" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-772060
helpers_test.go:179: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-772060: (1.498493794s)
--- PASS: TestKubernetesUpgrade (145.03s)
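
The upgrade path checked above: bring up a v1.28.0 cluster, stop it, start the same profile at v1.35.0-rc.1, then confirm that an in-place downgrade is rejected (exit 106) and that the suggested recovery is delete-and-recreate. A sketch:
	minikube start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.28.0 --driver=kvm2 --container-runtime=containerd
	minikube stop -p kubernetes-upgrade-772060
	minikube start -p kubernetes-upgrade-772060 --memory=3072 --kubernetes-version=v1.35.0-rc.1 --driver=kvm2 --container-runtime=containerd
	# downgrading in place is refused; recreate instead:
	minikube delete -p kubernetes-upgrade-772060
	minikube start -p kubernetes-upgrade-772060 --kubernetes-version=v1.28.0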

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-768838 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-768838 --no-kubernetes --kubernetes-version=v1.28.0 --driver=kvm2  --container-runtime=containerd: exit status 14 (90.880151ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-768838] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
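
As the error above states, --no-kubernetes and --kubernetes-version are mutually exclusive; if a version is pinned in the global config, clear it before starting a Kubernetes-free profile:
	minikube config unset kubernetes-version
	minikube start -p NoKubernetes-768838 --no-kubernetes --memory=3072 --driver=kvm2 --container-runtime=containerd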

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (80.15s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-768838 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-768838 --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (1m19.890036896s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-768838 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (80.15s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (43.96s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-768838 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-768838 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (42.170683805s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-768838 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-768838 status -o json: exit status 2 (225.093718ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-768838","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-768838
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-768838: (1.562833347s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (43.96s)

                                                
                                    
TestNoKubernetes/serial/Start (23.67s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-768838 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd
E1219 03:25:34.379673    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-768838 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=containerd: (23.673882611s)
--- PASS: TestNoKubernetes/serial/Start (23.67s)

                                                
                                    
TestISOImage/Setup (20.86s)

                                                
                                                
=== RUN   TestISOImage/Setup
iso_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p guest-269272 --no-kubernetes --driver=kvm2  --container-runtime=containerd
iso_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p guest-269272 --no-kubernetes --driver=kvm2  --container-runtime=containerd: (20.863007039s)
--- PASS: TestISOImage/Setup (20.86s)

                                                
                                    
TestPause/serial/Start (100.87s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-656299 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-656299 --memory=3072 --install-addons=false --wait=all --driver=kvm2  --container-runtime=containerd: (1m40.866382803s)
--- PASS: TestPause/serial/Start (100.87s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/22230-5003/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-768838 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-768838 "sudo systemctl is-active --quiet service kubelet": exit status 1 (179.446122ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.18s)
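Note: "systemctl is-active --quiet" exits 0 only when the unit is active, so the non-zero exit above confirms kubelet is not running inside the guest. A minimal sketch of the same probe, with the ssh command copied from the log and the trailing echo added for illustration:
out/minikube-linux-amd64 ssh -p NoKubernetes-768838 "sudo systemctl is-active --quiet service kubelet" \
  || echo "kubelet is not active (expected for a --no-kubernetes profile)"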

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-768838
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-768838: (1.479488048s)
--- PASS: TestNoKubernetes/serial/Stop (1.48s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.49s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-694633 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-694633 --memory=3072 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=containerd: exit status 14 (123.226994ms)

                                                
                                                
-- stdout --
	* [false-694633] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=22230
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1219 03:25:56.809375   38742 out.go:360] Setting OutFile to fd 1 ...
	I1219 03:25:56.809673   38742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:25:56.809685   38742 out.go:374] Setting ErrFile to fd 2...
	I1219 03:25:56.809691   38742 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1219 03:25:56.809971   38742 root.go:338] Updating PATH: /home/jenkins/minikube-integration/22230-5003/.minikube/bin
	I1219 03:25:56.810549   38742 out.go:368] Setting JSON to false
	I1219 03:25:56.811861   38742 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-5","uptime":4096,"bootTime":1766110661,"procs":194,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1045-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1219 03:25:56.811924   38742 start.go:143] virtualization: kvm guest
	I1219 03:25:56.813525   38742 out.go:179] * [false-694633] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1219 03:25:56.819501   38742 out.go:179]   - MINIKUBE_LOCATION=22230
	I1219 03:25:56.819530   38742 notify.go:221] Checking for updates...
	I1219 03:25:56.821692   38742 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1219 03:25:56.823304   38742 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/22230-5003/kubeconfig
	I1219 03:25:56.828183   38742 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/22230-5003/.minikube
	I1219 03:25:56.829194   38742 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1219 03:25:56.830142   38742 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1219 03:25:56.831587   38742 config.go:182] Loaded profile config "NoKubernetes-768838": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1219 03:25:56.831720   38742 config.go:182] Loaded profile config "guest-269272": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v0.0.0
	I1219 03:25:56.831805   38742 config.go:182] Loaded profile config "pause-656299": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
	I1219 03:25:56.831878   38742 driver.go:422] Setting default libvirt URI to qemu:///system
	I1219 03:25:56.864596   38742 out.go:179] * Using the kvm2 driver based on user configuration
	I1219 03:25:56.865567   38742 start.go:309] selected driver: kvm2
	I1219 03:25:56.865578   38742 start.go:928] validating driver "kvm2" against <nil>
	I1219 03:25:56.865587   38742 start.go:939] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1219 03:25:56.867139   38742 out.go:203] 
	W1219 03:25:56.868103   38742 out.go:285] X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	X Exiting due to MK_USAGE: The "containerd" container runtime requires CNI
	I1219 03:25:56.868969   38742 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-694633 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-694633" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-694633

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-694633"

                                                
                                                
----------------------- debugLogs end: false-694633 [took: 3.221846939s] --------------------------------
helpers_test.go:176: Cleaning up "false-694633" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p false-694633
--- PASS: TestNetworkPlugins/group/false (3.49s)
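Note: this group never starts a VM; the start is expected to be rejected up front, and the debugLogs dump above is the harness probing a profile that was never created (hence the repeated "context was not found" and "Profile not found" lines). A minimal sketch of the expected rejection, with the flags from the log and an illustrative exit-code echo:
out/minikube-linux-amd64 start -p false-694633 --memory=3072 --cni=false \
  --driver=kvm2 --container-runtime=containerd
echo "exit code: $?"   # 14 (MK_USAGE) in this run: the containerd runtime requires a CNI, so --cni=false is refused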

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (50.74s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-768838 --driver=kvm2  --container-runtime=containerd
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-768838 --driver=kvm2  --container-runtime=containerd: (50.742647226s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (50.74s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-768838 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-768838 "sudo systemctl is-active --quiet service kubelet": exit status 1 (165.948088ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 4

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (5.17s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (5.17s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (99.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.35.0.3175466618 start -p stopped-upgrade-417293 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.35.0.3175466618 start -p stopped-upgrade-417293 --memory=3072 --vm-driver=kvm2  --container-runtime=containerd: (1m6.801605049s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.35.0.3175466618 -p stopped-upgrade-417293 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.35.0.3175466618 -p stopped-upgrade-417293 stop: (1.397889613s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-417293 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-417293 --memory=3072 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (31.412954865s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (99.61s)
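Note: the upgrade sequence above is start with the released v1.35.0 binary, stop it, then start the same profile again with the freshly built binary. A minimal sketch of that sequence; the /tmp binary name is the randomly suffixed download from this run, so treat it as an example path:
/tmp/minikube-v1.35.0.3175466618 start -p stopped-upgrade-417293 --memory=3072 \
  --vm-driver=kvm2 --container-runtime=containerd   # released binary creates the cluster
/tmp/minikube-v1.35.0.3175466618 -p stopped-upgrade-417293 stop
out/minikube-linux-amd64 start -p stopped-upgrade-417293 --memory=3072 \
  --driver=kvm2 --container-runtime=containerd      # new binary restarts the stopped cluster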

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (49.24s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-656299 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-656299 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=containerd: (49.217131593s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (49.24s)

                                                
                                    
x
+
TestPause/serial/Pause (0.85s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-656299 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.85s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-656299 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-656299 --output=json --layout=cluster: exit status 2 (253.177488ms)

                                                
                                                
-- stdout --
	{"Name":"pause-656299","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-656299","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.25s)
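Note: the 418/Paused codes in the JSON above are the intended result of the preceding pause, and status deliberately exits non-zero while the cluster is paused. A minimal sketch of the pause-then-verify step, using the two commands from the log plus an illustrative exit-code echo:
out/minikube-linux-amd64 pause -p pause-656299 --alsologtostderr -v=5
out/minikube-linux-amd64 status -p pause-656299 --output=json --layout=cluster
echo "status exit code: $?"   # 2 in this run: apiserver Paused (418), kubelet Stopped (405)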

                                                
                                    
x
+
TestPause/serial/Unpause (0.66s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-656299 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.66s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.95s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-656299 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.95s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (81.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=containerd: (1m21.295605083s)
--- PASS: TestNetworkPlugins/group/auto/Start (81.30s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.8s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-656299 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-656299 --alsologtostderr -v=5: (1.800317642s)
--- PASS: TestPause/serial/DeletePaused (1.80s)

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (2.53s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.533129665s)
--- PASS: TestPause/serial/VerifyDeletedResources (2.53s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (81.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=containerd: (1m21.204979911s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (81.21s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-417293
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-417293: (1.310002438s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (104.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd
E1219 03:29:44.847248    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=containerd: (1m44.244016551s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.24s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-694633 "pgrep -a kubelet"
I1219 03:29:49.717227    8978 config.go:182] Loaded profile config "auto-694633": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-694633 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-wrr46" [4129b835-1d47-4bbb-9744-69277e477eb4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-wrr46" [4129b835-1d47-4bbb-9744-69277e477eb4] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004787935s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (9.27s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-x4bbj" [e7d11856-d814-4412-8fae-943987364dec] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004583751s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
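Note: ControllerPod waits up to 10m for a pod carrying the CNI's label to report healthy. A rough hand-run equivalent under the same label and namespace; the kubectl wait form is an assumption for manual reproduction, not the command the Go helper actually runs:
kubectl --context kindnet-694633 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m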

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-694633 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.14s)
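Note: DNS, Localhost and HairPin above exercise the netcat deployment three ways: cluster DNS resolution, a loopback connection, and a hairpin connection back through its own service. A minimal sketch of the three probes, with the commands copied from the log:
kubectl --context auto-694633 exec deployment/netcat -- nslookup kubernetes.default
kubectl --context auto-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
kubectl --context auto-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"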

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-694633 "pgrep -a kubelet"
I1219 03:30:00.656819    8978 config.go:182] Loaded profile config "kindnet-694633": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-694633 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7q45v" [6589c5fd-bd3f-40f4-ad03-05f9af2a5367] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7q45v" [6589c5fd-bd3f-40f4-ad03-05f9af2a5367] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004023246s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-694633 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (83.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=containerd: (1m23.156700353s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (83.16s)
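Note: the Start variants in this group differ only in how the CNI is chosen: the default (auto), --cni=kindnet/calico/flannel/bridge, --enable-default-cni=true, or a manifest path as here. A minimal sketch of the manifest variant, with the flags taken from the log; testdata/kube-flannel.yaml is the manifest shipped alongside the test suite:
out/minikube-linux-amd64 start -p custom-flannel-694633 --memory=3072 --wait=true --wait-timeout=15m \
  --cni=testdata/kube-flannel.yaml --driver=kvm2 --container-runtime=containerd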

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-vmgxt" [e461d757-83c6-4e35-9dfb-85e40e06c4ab] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-vmgxt" [e461d757-83c6-4e35-9dfb-85e40e06c4ab] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005086774s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (93.61s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=containerd: (1m33.607443905s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (93.61s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-694633 "pgrep -a kubelet"
I1219 03:30:27.366070    8978 config.go:182] Loaded profile config "calico-694633": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-694633 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-2mrx9" [c845fc9c-24d6-46ef-a60e-8da02ce88bc4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-2mrx9" [c845fc9c-24d6-46ef-a60e-8da02ce88bc4] Running
E1219 03:30:34.379898    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.005268438s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-694633 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (72.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd
E1219 03:31:22.089055    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=containerd: (1m12.584058821s)
--- PASS: TestNetworkPlugins/group/flannel/Start (72.58s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (86.72s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-694633 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=containerd: (1m26.721203389s)
--- PASS: TestNetworkPlugins/group/bridge/Start (86.72s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-694633 "pgrep -a kubelet"
I1219 03:31:37.739779    8978 config.go:182] Loaded profile config "custom-flannel-694633": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-694633 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-8npvs" [eef6b178-239b-44a5-ac75-ff2ed3305069] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1219 03:31:39.042247    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-8npvs" [eef6b178-239b-44a5-ac75-ff2ed3305069] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 9.004837891s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (9.26s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-694633 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.21s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-694633 "pgrep -a kubelet"
I1219 03:32:00.350447    8978 config.go:182] Loaded profile config "enable-default-cni-694633": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-694633 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-7xrrk" [af915ba0-e551-4930-acdf-6dbef22f0f0f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-7xrrk" [af915ba0-e551-4930-acdf-6dbef22f0f0f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004325215s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.25s)
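Each NetCatPod step in this group is the same two-part recipe: (re)create the netcat test deployment, then wait for its pod to report Ready. A rough manual equivalent is sketched below; the kubectl wait call is a substitution for the harness's polling on the app=netcat label, and testdata/netcat-deployment.yaml is the manifest path relative to minikube's test/integration directory.

# Recreate the netcat deployment used by the connectivity checks.
kubectl --context enable-default-cni-694633 replace --force -f testdata/netcat-deployment.yaml
# Wait for the pod behind the app=netcat label to become Ready (harness timeout is 15m).
kubectl --context enable-default-cni-694633 wait --for=condition=Ready pod -l app=netcat --timeout=15m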

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (100.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (1m40.484931444s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (100.49s)
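The FirstStart step is a plain `minikube start` with the flags shown in the invocation above; nothing harness-specific is involved. Reproducing it outside the test run would require a host with KVM/libvirt and a minikube binary (the log uses the locally built out/minikube-linux-amd64):

# Bring up the "old" Kubernetes profile on the kvm2 driver with containerd.
out/minikube-linux-amd64 start -p old-k8s-version-638861 \
  --memory=3072 --alsologtostderr --wait=true \
  --kvm-network=default --kvm-qemu-uri=qemu:///system \
  --disable-driver-mounts --keep-context=false \
  --driver=kvm2 --container-runtime=containerd \
  --kubernetes-version=v1.28.0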

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-gs4bw" [c130b49a-017f-4b63-9bd9-ef413fb5dfbf] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005811926s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-694633 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-694633 "pgrep -a kubelet"
I1219 03:32:15.076152    8978 config.go:182] Loaded profile config "flannel-694633": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.20s)
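The KubeletFlags checks in this group all reduce to listing the running kubelet process inside the profile's VM so its command-line flags can be inspected:

# Print the kubelet process and its full command line inside the flannel profile's VM.
out/minikube-linux-amd64 ssh -p flannel-694633 "pgrep -a kubelet"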

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-694633 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-xcxxk" [43de1681-322b-486a-8095-a24a380ff94e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-xcxxk" [43de1681-322b-486a-8095-a24a380ff94e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 10.005967849s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (10.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-694633 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (93.13s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (1m33.127126752s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (93.13s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (95.02s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3: (1m35.021916718s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (95.02s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-694633 "pgrep -a kubelet"
I1219 03:32:57.490326    8978 config.go:182] Loaded profile config "bridge-694633": Driver=kvm2, ContainerRuntime=containerd, KubernetesVersion=v1.34.3
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.20s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-694633 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-t7x7f" [f97fd550-e1b1-4ecc-8a53-afb0e0e49bb6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-t7x7f" [f97fd550-e1b1-4ecc-8a53-afb0e0e49bb6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.002905683s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-694633 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.40s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-694633 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.05s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3: (1m30.048440282s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (90.05s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (12.3s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-638861 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [be434df5-3dc3-46ee-afab-a94b9048072e] Pending
helpers_test.go:353: "busybox" [be434df5-3dc3-46ee-afab-a94b9048072e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [be434df5-3dc3-46ee-afab-a94b9048072e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 12.003855259s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-638861 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (12.30s)
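DeployApp creates a throwaway busybox pod and then reads its open-file limit to confirm exec works on the freshly started cluster. A manual sketch, with kubectl wait standing in for the harness's label polling and testdata/busybox.yaml taken relative to minikube's test/integration directory:

# Create the busybox test pod on the old-k8s-version profile.
kubectl --context old-k8s-version-638861 create -f testdata/busybox.yaml
# Wait for it to become Ready (harness timeout is 8m), then check the fd limit via exec.
kubectl --context old-k8s-version-638861 wait --for=condition=Ready pod/busybox --timeout=8m
kubectl --context old-k8s-version-638861 exec busybox -- /bin/sh -c "ulimit -n"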

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-638861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-638861 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.037631301s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-638861 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.29s)
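EnableAddonWhileActive turns on the metrics-server addon with the image and registry overridden (fake.domain is presumably unreachable on purpose, so the image is never actually pulled), then checks that the Deployment object exists. The two commands from the log, as a standalone sketch:

# Enable metrics-server with a substitute image and an unreachable registry.
out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-638861 \
  --images=MetricsServer=registry.k8s.io/echoserver:1.4 \
  --registries=MetricsServer=fake.domain
# Confirm the Deployment was created in kube-system.
kubectl --context old-k8s-version-638861 describe deploy/metrics-server -n kube-system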

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (81.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-638861 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-638861 --alsologtostderr -v=3: (1m21.349186929s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (81.35s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (13.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-728806 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8b96b916-9a7f-4d6d-9e31-aad0b0358a6b] Pending
helpers_test.go:353: "busybox" [8b96b916-9a7f-4d6d-9e31-aad0b0358a6b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [8b96b916-9a7f-4d6d-9e31-aad0b0358a6b] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 13.003696056s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-728806 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (13.28s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-728806 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-728806 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.04s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (72s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-728806 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-728806 --alsologtostderr -v=3: (1m11.995586931s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (72.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (12.39s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-832734 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1da2ff6b-f366-4fcb-9aff-6f252b564072] Pending
helpers_test.go:353: "busybox" [1da2ff6b-f366-4fcb-9aff-6f252b564072] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1da2ff6b-f366-4fcb-9aff-6f252b564072] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 12.003246937s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-832734 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (12.39s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-832734 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-832734 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.94s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (85.21s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-832734 --alsologtostderr -v=3
E1219 03:34:44.847134    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/addons-925443/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:49.963299    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:49.968580    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:49.978851    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:49.999100    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:50.039385    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:50.119756    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:50.280527    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:50.601211    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:51.242145    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:52.522783    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-832734 --alsologtostderr -v=3: (1m25.205692622s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (85.21s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-382606 create -f testdata/busybox.yaml
E1219 03:34:54.463702    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:54.469098    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:54.479599    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
E1219 03:34:54.500072    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [a15483a8-253a-46ba-89cf-a7281f75888f] Pending
E1219 03:34:54.541041    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:54.621413    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:54.781872    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:55.083688    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:55.102844    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [a15483a8-253a-46ba-89cf-a7281f75888f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1219 03:34:55.743865    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:57.024302    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:34:59.585228    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:00.204804    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [a15483a8-253a-46ba-89cf-a7281f75888f] Running
E1219 03:35:04.705417    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 12.004454296s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-382606 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (12.26s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-382606 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-382606 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (81.71s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-382606 --alsologtostderr -v=3
E1219 03:35:10.445904    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:14.946078    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:17.430711    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-382606 --alsologtostderr -v=3: (1m21.706577474s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (81.71s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-638861 -n old-k8s-version-638861
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-638861 -n old-k8s-version-638861: exit status 7 (59.760571ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-638861 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.14s)
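EnableAddonAfterStop shows that addons can still be toggled while the cluster is stopped: status exits non-zero (exit status 7 with "Stopped" on stdout in this run, which the test treats as acceptable), and the dashboard addon is enabled so that it takes effect on the next start. As a sketch against the same profile:

# Host status for a stopped profile; expect a non-zero exit and "Stopped" on stdout.
out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-638861 -n old-k8s-version-638861
# Enabling an addon is still allowed; it is applied when the profile starts again.
out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-638861 \
  --images=MetricsScraper=registry.k8s.io/echoserver:1.4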

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (58.16s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0
E1219 03:35:21.169499    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:21.174776    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:21.185043    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:21.205395    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:21.245824    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:21.326167    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:21.486693    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:21.806889    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:22.447240    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:23.727689    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-638861 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.28.0: (57.854257265s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-638861 -n old-k8s-version-638861
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (58.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-728806 -n no-preload-728806
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-728806 -n no-preload-728806: exit status 7 (59.533183ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-728806 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (58.68s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1219 03:35:26.288759    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:30.926269    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:31.408960    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:34.380642    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-509202/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:35.427126    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:35:41.649259    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-728806 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (58.412154692s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-728806 -n no-preload-728806
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (58.68s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832734 -n embed-certs-832734
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832734 -n embed-certs-832734: exit status 7 (62.784212ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-832734 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (59.24s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3
E1219 03:36:02.129528    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:11.887054    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/auto-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:16.387532    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/kindnet-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-832734 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3: (58.975473171s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-832734 -n embed-certs-832734
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (59.24s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606: exit status 7 (85.290993ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-382606 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.29s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3
E1219 03:36:37.985793    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:37.991143    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:38.001539    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:38.021910    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:38.062311    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:38.142685    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:38.302851    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:38.623126    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:39.042541    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/functional-991175/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:39.263983    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:40.544641    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:43.090624    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:43.105142    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:36:48.225883    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/custom-flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-382606 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.34.3: (1m1.31378896s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (61.55s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-638861 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.21s)
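VerifyKubernetesImages lists every image in the profile's container runtime and logs anything outside the expected Kubernetes/minikube set; the busybox, kindnetd and kong images reported above come from earlier steps in this run and do not fail the test. The underlying command:

# Dump all images known to the profile's container runtime as JSON.
out/minikube-linux-amd64 -p old-k8s-version-638861 image list --format=json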

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (2.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-638861 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-638861 --alsologtostderr -v=1: (1.000934453s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-638861 -n old-k8s-version-638861
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-638861 -n old-k8s-version-638861: exit status 2 (218.424633ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-638861 -n old-k8s-version-638861
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-638861 -n old-k8s-version-638861: exit status 2 (229.681756ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-638861 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-638861 -n old-k8s-version-638861
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-638861 -n old-k8s-version-638861
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.93s)
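The Pause step is a pause/verify/unpause round trip: after `pause`, the status sub-fields show the API server as Paused and the kubelet as Stopped (both with exit status 2, which the test accepts), and `unpause` brings them back. A standalone sketch against the same profile:

# Pause the control plane, inspect the two status fields, then unpause.
out/minikube-linux-amd64 pause -p old-k8s-version-638861 --alsologtostderr -v=1
out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-638861 -n old-k8s-version-638861
out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-638861 -n old-k8s-version-638861
out/minikube-linux-amd64 unpause -p old-k8s-version-638861 --alsologtostderr -v=1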

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (47.37s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-979595 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-979595 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (47.373212133s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (47.37s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-728806 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.97s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-728806 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p no-preload-728806 --alsologtostderr -v=1: (1.031051591s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-728806 -n no-preload-728806
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-728806 -n no-preload-728806: exit status 2 (220.414735ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-728806 -n no-preload-728806
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-728806 -n no-preload-728806: exit status 2 (239.520271ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-728806 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-728806 -n no-preload-728806
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-728806 -n no-preload-728806
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.97s)

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-832734 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (2.99s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-832734 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p embed-certs-832734 --alsologtostderr -v=1: (1.045189766s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832734 -n embed-certs-832734
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832734 -n embed-certs-832734: exit status 2 (223.270593ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-832734 -n embed-certs-832734
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-832734 -n embed-certs-832734: exit status 2 (221.167361ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-832734 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-832734 -n embed-certs-832734
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-832734 -n embed-certs-832734
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.99s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-979595 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.96s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (85.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-979595 --alsologtostderr -v=3
E1219 03:55:21.169450    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/calico-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-979595 --alsologtostderr -v=3: (1m25.410732884s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (85.41s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-382606 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: library/kong:3.9
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.19s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-382606 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606: exit status 2 (205.194253ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606: exit status 2 (212.35662ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-382606 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-382606 -n default-k8s-diff-port-382606
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.63s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-979595 -n newest-cni-979595
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-979595 -n newest-cni-979595: exit status 7 (58.921211ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-979595 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.14s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (39.73s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-979595 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1
E1219 03:57:00.584670    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/enable-default-cni-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1219 03:57:08.866268    8978 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/flannel-694633/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-979595 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=containerd --kubernetes-version=v1.35.0-rc.1: (39.506034976s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-979595 -n newest-cni-979595
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (39.73s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-979595 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.20s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.39s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-979595 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-979595 -n newest-cni-979595
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-979595 -n newest-cni-979595: exit status 2 (204.991647ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-979595 -n newest-cni-979595
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-979595 -n newest-cni-979595: exit status 2 (204.69941ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-979595 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-979595 -n newest-cni-979595
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-979595 -n newest-cni-979595
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.39s)

                                                
                                    

Test skip (51/437)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
7 TestDownloadOnly/v1.28.0/kubectl 0
14 TestDownloadOnly/v1.34.3/cached-images 0
15 TestDownloadOnly/v1.34.3/binaries 0
16 TestDownloadOnly/v1.34.3/kubectl 0
23 TestDownloadOnly/v1.35.0-rc.1/cached-images 0
24 TestDownloadOnly/v1.35.0-rc.1/binaries 0
25 TestDownloadOnly/v1.35.0-rc.1/kubectl 0
29 TestDownloadOnlyKic 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
49 TestAddons/parallel/Olm 0
56 TestAddons/parallel/AmdGpuDevicePlugin 0
60 TestDockerFlags 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
116 TestFunctional/parallel/DockerEnv 0
117 TestFunctional/parallel/PodmanEnv 0
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
134 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
135 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
136 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
137 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
138 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
139 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
209 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv 0
223 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
224 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel 0.01
225 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService 0.01
226 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect 0.01
227 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
228 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
229 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
230 TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel 0.01
258 TestGvisorAddon 0
280 TestImageBuild 0
308 TestKicCustomNetwork 0
309 TestKicExistingNetwork 0
310 TestKicCustomSubnet 0
311 TestKicStaticIP 0
343 TestChangeNoneUser 0
346 TestScheduledStopWindows 0
348 TestSkaffold 0
350 TestInsufficientStorage 0
354 TestMissingContainerUpgrade 0
362 TestNetworkPlugins/group/kubenet 3.86
378 TestNetworkPlugins/group/cilium 3.54
384 TestStartStop/group/disable-driver-mounts 0.2
x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.3/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.3/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.35.0-rc.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.35.0-rc.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.35.0-rc.1/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:219: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1035: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with containerd false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)
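
This skip (and the identical tunnel skips below) is reported because changing host routes would prompt for a sudo password on the CI machine. One way such a pre-check could be written, shown here only as an illustration and not as minikube's actual check, is to try a no-op command under non-interactive sudo and skip when it fails.

    // Sketch only: probe for passwordless sudo before attempting tunnel setup.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // "sudo -n" never prompts; it fails immediately if a password would be required.
        if err := exec.Command("sudo", "-n", "true").Run(); err != nil {
            fmt.Println("password required for privileged commands, skipping tunnel tests:", err)
            return
        }
        fmt.Println("passwordless sudo available, tunnel tests can run")
    }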

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-rc.1/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as containerd container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-694633 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-694633" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/22230-5003/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 03:25:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.72.238:8443
  name: running-upgrade-832483
contexts:
- context:
    cluster: running-upgrade-832483
    extensions:
    - extension:
        last-update: Fri, 19 Dec 2025 03:25:53 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: running-upgrade-832483
  name: running-upgrade-832483
current-context: running-upgrade-832483
kind: Config
users:
- name: running-upgrade-832483
  user:
    client-certificate: /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/running-upgrade-832483/client.crt
    client-key: /home/jenkins/minikube-integration/22230-5003/.minikube/profiles/running-upgrade-832483/client.key
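
For reference, a kubeconfig like the dump above can also be consumed programmatically; note the dump's current-context is running-upgrade-832483, not the kubenet profile this debug log was collected for, which is why the kubectl calls above report a missing context. A minimal sketch, assuming the k8s.io/client-go library and an illustrative kubeconfig path, loads the file and prints the cluster's server version.

    // Sketch only: load a kubeconfig with client-go and query the server version.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Illustrative path; the dump above lives under the Jenkins workspace.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        v, err := client.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Println("server version:", v.GitVersion)
    }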

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-694633

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-694633"

                                                
                                                
----------------------- debugLogs end: kubenet-694633 [took: 3.677934199s] --------------------------------
helpers_test.go:176: Cleaning up "kubenet-694633" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-694633
--- SKIP: TestNetworkPlugins/group/kubenet (3.86s)
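
The wall of "Profile \"kubenet-694633\" not found" and "context was not found" messages above is expected rather than a failure: the kubenet group is skipped, so its post-skip debug collector probes a profile and kubectl context that were never created. Purely as a sketch (this is not the helper used by net_test.go), such a tolerant probe might shell out to kubectl and record errors instead of aborting:

package main

import (
	"fmt"
	"os/exec"
)

// probe runs one kubectl query against a context that may not exist and
// prints whatever comes back, much like the dump above records errors.
func probe(context, resource string) {
	out, err := exec.Command("kubectl", "--context", context, "get", resource, "-A").CombinedOutput()
	fmt.Printf(">>> kubectl --context %s get %s:\n%s", context, resource, out)
	if err != nil {
		fmt.Printf("(ignored error: %v)\n", err)
	}
}

func main() {
	probe("kubenet-694633", "pods")
	probe("kubenet-694633", "services")
}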

                                                
                                    
TestNetworkPlugins/group/cilium (3.54s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-694633 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-694633" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-694633

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-694633" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-694633"

                                                
                                                
----------------------- debugLogs end: cilium-694633 [took: 3.369513485s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-694633" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-694633
--- SKIP: TestNetworkPlugins/group/cilium (3.54s)
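
As with kubenet above, every error in the cilium block comes from probing a profile that was never started; the only real work is the profile cleanup run via helpers_test.go. A hypothetical sketch of that kind of cleanup hook (not the actual helpers_test.go code) registers the delete so it runs even when the test skips early:

package example

import (
	"os/exec"
	"testing"
)

// cleanupProfile registers a deferred "minikube delete -p <profile>" so the
// profile is removed even if the test body skips or fails before finishing.
func cleanupProfile(t *testing.T, minikubeBinary, profile string) {
	t.Helper()
	t.Cleanup(func() {
		out, err := exec.Command(minikubeBinary, "delete", "-p", profile).CombinedOutput()
		if err != nil {
			t.Logf("failed to delete profile %q: %v\n%s", profile, err, out)
		}
	})
}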

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-477416" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-477416
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)
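
This skip is driver gating rather than a regression: the group only runs on the virtualbox driver, while this job runs kvm2. A minimal, hypothetical sketch of such a gate (the real check in start_stop_delete_test.go differs in detail):

package example

import "testing"

// requireDriver skips the calling test unless the job's driver matches.
func requireDriver(t *testing.T, want, got string) {
	t.Helper()
	if got != want {
		t.Skipf("only runs on %s (current driver: %s)", want, got)
	}
}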

                                                
                                    